Presentation on theme: "Congestion in Data Networks"— Presentation transcript:
1 Congestion in Data Networks (Data Communications)
2 What Is Congestion?
Congestion occurs when the number of packets being transmitted through the network approaches the packet-handling capacity of the network
Congestion control aims to keep the number of packets below the level at which performance falls off dramatically
A data network is a network of queues
Generally 80% utilization is critical
Finite queues mean data may be lost
5 Effects of Congestion
Arriving packets are stored at input buffers (not ATM)
Routing decision made
Packet moves to output buffer
Packets queued for output are transmitted as fast as possible (statistical time-division multiplexing)
If packets arrive too fast to be routed, or to be output, buffers will fill
Can discard packets
Can use flow control
Can propagate congestion through the network
6 Ideal Performance
Top: as load increases, throughput increases directly
Middle: as load increases, delay increases
Bottom: power is the ratio of throughput to delay
7 Practical Performance
Ideal performance assumes infinite buffers and no overhead
Unfortunately, buffers are finite
Additional overhead occurs in exchanging congestion control messages
9 Congestion Control Objectives
In general, we want to:
Minimize discards
Maintain agreed QoS (if applicable)
Minimize the probability that one end user can monopolize resources at the expense of other end users
Be simple to implement, with little overhead on network or user
Create minimal additional traffic
Distribute resources fairly
Limit the spread of congestion
Operate effectively regardless of traffic flow
Have minimum impact on other systems
Minimize variance in QoS
10 Basic Mechanisms for Congestion Control
Open-Loop Congestion Control (rely on other layers for feedback and control)
Retransmission policy - a good policy can reduce congestion
Window policy - selective-reject is better than go-back-N; use a bigger window size
Acknowledgment policy - don't ack each packet individually
Discard policy - a good policy by routers may prevent congestion and at the same time may not harm the integrity of the transmission
Admission policy - a QoS mechanism
11 Basic Mechanisms for Congestion Control
Closed-Loop Congestion Control
Backpressure - when a router is congested, it informs the previous upstream router to reduce the rate of outgoing packets
Choke packet or choke point - sent by router to source, similar to ICMP's source quench packet
Implicit signaling - look for delay in some other action
Explicit signaling - router sends an explicit signal
Backward signaling - a bit is set in a packet moving in the direction opposite to the congestion
Forward signaling - a bit is set in a packet moving in the direction of the congestion. The receiver can use a policy such as slowing down acks to alleviate congestion
12 Basic Mechanisms for Congestion Control (visual examples)
13 Backpressure
If a node becomes congested it can slow down or halt the flow of packets from other nodes
May mean that other nodes have to apply control on incoming packet rates
Propagates back to the source
Can restrict to the logical connections generating the most traffic
Used in connection-oriented networks that allow hop-by-hop congestion control (e.g. X.25)
Not used in ATM or frame relay
Only recently developed for IPv6 (PRI field)
14 Choke Packet
Control packet generated at congested node
Sent to source node, e.g. ICMP source quench
From router or destination
Source cuts back until no more source quench messages arrive
Sent for every discarded packet, or for anticipated discards
Rather crude mechanism
15 Implicit Congestion Signaling
Transmission delay may increase with congestion
Packets may be discarded
Source can detect these as implicit indications of congestion (the source is responsible, not the network)
Useful on connectionless (datagram) networks, e.g. IP based (TCP includes congestion and flow control)
Used in frame relay LAPF
16 Explicit Congestion Signaling
Network alerts end systems of increasing congestion
Used on connection-oriented networks
End systems take steps to reduce offered load
Backwards: congestion avoidance info sent in the opposite direction of packet travel
Forwards: congestion avoidance info sent in the same direction as packet travel - when the end system receives the info, it either sends it back to the source or hands it to a higher layer to take action
17 Categories of Explicit Signaling
Binary - a bit set in a packet indicates congestion
Credit based - indicates how many packets the source may send; common for end-to-end flow control
Rate based - the source may transmit data at a rate up to the set limit; any node along the path of the connection can reduce the data rate limit in a control message to the source, e.g. ATM
18 How Does TCP Handle / Avoid Congestion? (details in TDC 365 and TDC 463)
To handle: TCP has a sender window size. The sender window size is the minimum of the receiver window size and the network congestion window size.
To avoid: TCP can use Slow Start and Additive Increase - at the beginning, TCP sets the congestion window size to one maximum segment size, then increases the window size with each ack.
Can also use Multiplicative Decrease - after a timeout, the threshold is set to 1/2 the congestion window at the time of the timeout, and the congestion window size is reset to one segment (then slow start)
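The slow start / additive increase / multiplicative decrease behavior above can be sketched as a small simulation. This is an illustration of the idea, not TCP's actual implementation; the function name and parameters are made up, and the congestion window is counted in segments (MSS).

```python
# Sketch of TCP congestion window evolution: slow start doubles cwnd
# each RTT up to ssthresh, then additive increase adds one segment per
# RTT; a timeout halves ssthresh and resets cwnd to one segment.

def tcp_cwnd_trace(rtts, timeout_at, ssthresh=16):
    cwnd = 1                               # start at one segment
    trace = []
    for rtt in range(rtts):
        trace.append(cwnd)
        if rtt == timeout_at:              # multiplicative decrease
            ssthresh = max(cwnd // 2, 1)
            cwnd = 1                       # back to slow start
        elif cwnd < ssthresh:              # slow start: exponential growth
            cwnd = min(cwnd * 2, ssthresh)
        else:                              # additive increase
            cwnd += 1
    return trace

print(tcp_cwnd_trace(10, timeout_at=6))
# → [1, 2, 4, 8, 16, 17, 18, 1, 2, 4]
```

The trace shows the characteristic sawtooth: exponential growth, linear growth past the threshold, then collapse to one segment after the timeout.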
20 How Does Frame Relay Handle Congestion?
Connection management, coupled with:
Discard strategy
Explicit signaling
Implicit signaling
In more detail:
21 Connection Management
Before a frame relay network allows a user to transmit data, they agree on a connection
Some call this an SLA (service level agreement); frame relay calls it the CIR (committed information rate)
Committed burst size - the maximum amount of data the network agrees to transfer under normal conditions
Excess burst size - the maximum amount of data in excess of the committed burst size
Different frame relay companies have different agreements
22 Connection Management
What happens if you exceed your CIR and the network experiences congestion?
Frame relay may start discarding your frames (Discard Eligible bit = 1) (discard strategy)
Does frame relay tell you that your frames are being tossed?
No. Frame relay assumes a higher layer protocol (such as TCP) will monitor lost or missing frames
Frame relay could discard arbitrarily with no regard for source, but then there is no reward for restraint, so end systems transmit as fast as possible
CIR is not 100% guaranteed, but the network tries hard
Aggregate CIR should not exceed the physical data rate
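The discard strategy above can be sketched per measurement interval: traffic within the committed burst (Bc) is forwarded, traffic between Bc and Bc + Be (excess burst) is forwarded with the DE bit set, and traffic beyond that may be discarded. This is a simplified illustration, not a real switch's algorithm; the function and names are hypothetical.

```python
# Hypothetical sketch of the discard-eligibility decision within one
# measurement interval. Bc = committed burst size, Be = excess burst
# size (both in bytes here, purely for illustration).

def classify_frames(frame_sizes, Bc, Be):
    """Return one of 'send' / 'send_DE' / 'discard' per frame."""
    sent = 0
    decisions = []
    for size in frame_sizes:
        sent += size
        if sent <= Bc:
            decisions.append('send')      # within committed burst
        elif sent <= Bc + Be:
            decisions.append('send_DE')   # forwarded with DE bit = 1
        else:
            decisions.append('discard')   # beyond excess burst
    return decisions

print(classify_frames([400, 400, 400, 400], Bc=1000, Be=500))
# → ['send', 'send', 'send_DE', 'discard']
```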
25 Explicit Signaling
Network alerts end systems of growing congestion
Backward explicit congestion notification - notifies the user that congestion avoidance procedures should be initiated for traffic in the opposite direction; simpler
Forward explicit congestion notification - notification goes forward, so the end user has to somehow get the signal back to the other end to tell it to slow down
Frame handler monitors its queues
May notify some or all logical connections
User response: reduce rate
29 Implicit Signaling
Implicit Congestion Notification (telecom definition): In frame relay, inference by user equipment that congestion has occurred in the network. The inference is triggered when the receiving frame relay access device (FRAD) notices transmission delays. Based on block, frame, or packet sequence numbers, another protocol may recognize that one or more frames have been lost in transit. Control mechanisms at the upper protocol layers of the end devices then deal with frame loss by requesting retransmissions.
From: YourDictionary.com
30 What About ATM?
High speed, small cell size, limited overhead bits
Requirements (difficult):
Majority of traffic not amenable to flow control
Feedback is slow because transmission time is small compared with propagation delay
Wide range of application demands
Different traffic patterns
Different network services
High-speed switching and transmission increases volatility
31 Latency/Speed Effects
Consider a typical ATM transmission speed of 150 Mbps
~2.8 x 10^-6 seconds to insert a single cell
Time to traverse the network depends on propagation delay and switching delay
Assume propagation at two-thirds the speed of light
If source and destination are on opposite sides of the USA, propagation time ~ 48 x 10^-3 seconds
Given implicit congestion control, by the time a dropped-cell notification has reached the source, 7.2 x 10^6 bits have been transmitted
So, this is not a good strategy for ATM
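The arithmetic on this slide can be checked directly: a 53-byte ATM cell is 424 bits, and at 150 Mbps the source keeps transmitting for the full propagation delay before any feedback can arrive.

```python
# Verifying the slide's numbers: cell insertion time at 150 Mbps, and
# bits sent during the slide's 48 ms cross-country propagation delay.

CELL_BITS = 53 * 8            # 424 bits per ATM cell
RATE = 150e6                  # 150 Mbps link

insertion_time = CELL_BITS / RATE      # time to put one cell on the wire
prop_time = 48e-3                      # one-way propagation (slide's figure)
bits_in_flight = RATE * prop_time      # transmitted before feedback arrives

print(f"{insertion_time:.2e} s per cell")   # ~2.83e-06 s
print(f"{bits_in_flight:.1e} bits")         # 7.2e+06 bits
```

This is why the slide concludes implicit signaling is a poor fit for ATM: over 7 million bits are already in flight before the source learns anything.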
32 Cell Delay Variation
For ATM voice/video, data is a stream of cells
Delay across the network must be short AND the rate of delivery must be constant
There will always be some variation in transit
Delay cell delivery to the application so that a constant bit rate can be maintained
33 Time Re-assembly of CBR Cells
D(i) = end-to-end delay of the ith cell
V(0) = estimate of the amount of cell delay variation that an application can tolerate
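A minimal playback-buffer sketch of the idea, under simplified assumptions (not the slide's exact algorithm): hold the first cell for V0 seconds, then deliver cells at the constant CBR interval T; a cell arriving after its scheduled playback instant is treated as lost.

```python
# Illustrative jitter-buffer sketch for CBR cell re-assembly. Names and
# the 'play'/'late' classification are made up for this example.

def playback_schedule(arrivals, V0, T):
    """arrivals[i] = arrival time of cell i; returns 'play'/'late' per cell."""
    start = arrivals[0] + V0           # playback of cell 0 begins here
    results = []
    for i, t in enumerate(arrivals):
        due = start + i * T            # constant-rate playback instant
        results.append('play' if t <= due else 'late')
    return results

print(playback_schedule([0.0, 1.1, 2.7, 3.4], V0=0.5, T=1.0))
# → ['play', 'play', 'late', 'play']
```

A larger V0 tolerates more delay variation at the cost of added end-to-end latency, which is the trade-off the slide's V(0) estimate captures.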
34 Various Network Contributions to Cell Delay Variation
Packet-switched networks: queuing delays, routing decision time
Frame relay: as above, but to a lesser extent
ATM: less than frame relay
ATM protocol designed to minimize processing overheads at switches
ATM switches have very high throughput
Only noticeable delay is from congestion
Must not accept a load that causes congestion
35 Cell Delay Variation At The UNI in ATM
Even if the application produces data at a fixed rate, processing at (potentially) three layers of ATM causes delay
Interleaving cells from different connections
Operation and maintenance signals need to be interleaved
If using synchronous digital hierarchy frames, potential delays are inserted at the physical layer
Cannot predict these delays
(See figure next slide)
37 Traffic and Congestion Control Objectives for ATM
ATM layer traffic and congestion control should support QoS classes for all foreseeable network services
ATM layer traffic and congestion control should not rely on AAL protocols that are network specific, nor on higher-level application-specific protocols
Any traffic and congestion controls should minimize network and end-to-end system complexity
38 Traffic Management and Congestion Control Techniques
ITU-T and the ATM Forum have defined a range of traffic management functions to maintain the QoS of ATM connections:
(1) Resource management using virtual paths - separate traffic flows according to service characteristics
(2) Connection admission control
(3) Usage parameter control
(4) Traffic shaping
Let's examine these in more detail
39 Resource Management Using Virtual Paths (1) ATM network can use the virtual path to group similar virtual channels
40 Connection Admission Control (2)
Good first line of defense
User specifies traffic characteristics for a new connection (VCC or VPC) by selecting a QoS
Network accepts the connection only if it can meet the demand
Traffic contract:
Peak cell rate - upper bound; CBR and VBR
Cell delay variation - CBR and VBR
Sustainable cell rate - average rate; VBR
Burst tolerance - VBR
41 Usage Parameter Control (3)
Monitor established connections to ensure traffic conforms to the contract
Protects network resources from overload by one connection
Done on VCC and VPC
Control of peak cell rate and cell delay variation, or
Control of sustainable cell rate and burst tolerance
Discard cells that do not conform to the traffic contract
Called traffic policing
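Usage parameter control in ATM is commonly described in terms of the Generic Cell Rate Algorithm (GCRA). The slides do not give the algorithm, so this is a sketch of GCRA's virtual-scheduling form: I is the increment (the nominal inter-cell time, 1/rate) and L is the tolerance.

```python
# GCRA virtual-scheduling sketch: a cell arriving earlier than the
# theoretical arrival time (TAT) minus the tolerance L is nonconforming
# and may be tagged or discarded by the policer.

def gcra(arrival_times, I, L):
    """Return 'conform'/'nonconform' for each cell arrival time."""
    tat = None                             # theoretical arrival time
    results = []
    for t in arrival_times:
        if tat is None or t >= tat - L:
            results.append('conform')
            tat = (t if tat is None or t > tat else tat) + I
        else:
            results.append('nonconform')   # policed; TAT unchanged
    return results

print(gcra([0.0, 1.0, 1.2, 2.0, 4.0], I=1.0, L=0.5))
# → ['conform', 'conform', 'nonconform', 'conform', 'conform']
```

The cell at t = 1.2 arrives too soon after the previous conforming cell and is policed; the contract rate (1/I) with slack L is otherwise respected.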
42 Traffic Shaping (4)
Smooth out traffic flow and reduce cell clumping
Token bucket and leaky bucket are examples of traffic shaping
Token bucket allows bursts, while leaky bucket maintains an even flow
(See figures next slides)
46 Figure: Leaky bucket implementation
Leaky bucket keeps an average flow moving. Queue overflows? Discard packets. Unlike token bucket, there is no credit for periods with no transmissions.
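The contrast the slides draw between the two shapers can be sketched side by side. This is an illustrative time-slotted model, not a production implementation: the leaky bucket drains at a fixed rate regardless of input, while the token bucket banks unused credit and so permits bursts.

```python
# Time-slotted sketches of both shapers. arrivals[t] = packets arriving
# in slot t; each function returns the packets sent per slot.

def leaky_bucket(arrivals, rate, capacity):
    queue, sent = 0, []
    for a in arrivals:
        queue = min(queue + a, capacity)   # overflow beyond capacity dropped
        out = min(queue, rate)             # fixed drain rate, no credit
        queue -= out
        sent.append(out)
    return sent

def token_bucket(arrivals, rate, capacity):
    tokens, sent = 0, []
    for a in arrivals:
        tokens = min(tokens + rate, capacity)  # idle slots bank credit
        out = min(a, tokens)                   # burst allowed up to saved tokens
        tokens -= out
        sent.append(out)
    return sent

print(leaky_bucket([5, 0, 0, 0], rate=2, capacity=4))  # → [2, 2, 0, 0]
print(token_bucket([0, 0, 5, 0], rate=2, capacity=4))  # → [0, 0, 4, 0]
```

The leaky bucket flattens the burst into an even flow (and drops the overflow); the token bucket, having been idle for two slots, releases a burst of 4 at once.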
47 ATM's Real-time Traffic Management
QoS provided for CBR and rt-VBR is based on a traffic contract (connection admission control) and UPC (usage parameter control)
There is no feedback in these systems; cells are simply discarded. This is called open-loop control.
This is not used for ABR or UBR traffic.
48 Non-real-time Traffic Management
Some applications (Web, file transfer) do not have well-defined traffic characteristics
Best effort: allow these applications to share unused capacity; if congestion builds, cells are dropped, e.g. UBR
Closed-loop control: ABR connections share available capacity
Share varies between minimum cell rate (MCR) and peak cell rate (PCR)
ABR flow limited to available capacity by feedback
Buffers absorb excess traffic during feedback delay
Low cell loss
49 Feedback Mechanisms
Transmission rate characteristics: allowed cell rate (ACR), minimum cell rate (MCR), peak cell rate (PCR), initial cell rate (ICR)
Start with ACR = ICR
Adjust ACR based on feedback from the network
Resource management cells carry:
Congestion indication (CI) bit
No increase (NI) bit
Explicit cell rate (ER) field
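A hedged sketch of how an ABR source might adjust its allowed cell rate from a returning RM cell. The general shape (multiplicative decrease on CI, additive increase otherwise, clamped between MCR and the smaller of ER and PCR) follows the ABR model described above, but the rate factors and the function itself are illustrative, not the ATM Forum's exact rules.

```python
# Illustrative ACR update on receipt of one backward RM cell. rif/rdf
# (rate increase/decrease factors) are example values, not spec values.

def adjust_acr(acr, ci, ni, er, mcr, pcr, rif=1/16, rdf=1/4):
    if ci:                        # congestion indicated: decrease
        acr -= acr * rdf
    elif not ni:                  # no congestion, increase permitted
        acr += pcr * rif          # additive increase step
    acr = min(acr, er, pcr)       # never above explicit rate or PCR
    return max(acr, mcr)          # never below MCR

acr = 10.0
acr = adjust_acr(acr, ci=False, ni=False, er=100.0, mcr=1.0, pcr=64.0)
print(acr)   # → 14.0  (10 + 64/16)
acr = adjust_acr(acr, ci=True, ni=False, er=100.0, mcr=1.0, pcr=64.0)
print(acr)   # → 10.5  (14 * 3/4)
```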
52 23.7 Integrated Services
Integrated Services (IntServ) is a model used to provide QoS in the Internet at the IP layer.
IntServ is a flow-based model, in that a user needs to create a flow or virtual circuit between source and destination.
But IP is connectionless. How do you create a connection? Use RSVP.
53 Figure: Path messages
Path messages are sent from the sender (S1) to all receivers (multiple if multicast). This establishes the path.
54 Figure: Resv messages
Once the path is set, receivers return Resv messages. Note how reservations are merged (next slide).
55 Figure: Reservation merging
R3 takes the larger of the two reservations and sends that upstream.
56 Figure: Reservation styles
Wild card filter - the router creates a single reservation for all the senders, based on the largest request.
Fixed filter - the router creates a distinct reservation for each flow.
Shared explicit - the router creates a single reservation which can be shared by a set of flows.
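The merging behavior of these styles can be sketched in a few lines. This is a simplification for illustration (real RSVP flowspecs carry more than a single bandwidth number, and the function and style names here are made up): for the single-reservation styles the router forwards one merged request, the maximum of what its downstream receivers asked for.

```python
# Illustrative RSVP-style reservation merging at a router.

def merge_reservations(style, requests_kbps):
    """requests_kbps: bandwidth requested by each downstream receiver."""
    if style in ('wildcard', 'shared_explicit'):
        return [max(requests_kbps)]        # one merged reservation upstream
    if style == 'fixed':
        return sorted(requests_kbps)       # distinct reservation per flow
    raise ValueError(f"unknown style: {style}")

print(merge_reservations('wildcard', [2000, 3000]))   # → [3000]
print(merge_reservations('fixed', [3000, 2000]))      # → [2000, 3000]
```

This is exactly the behavior on the previous slide: R3 forwards the larger of the two downstream reservations upstream.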
57 23.8 Differentiated Services
An alternative to Integrated Services.
Produced by the IETF to create a class-based QoS model for IP.
Beyond the scope of this class.
58 Figure: Traffic conditioner
Meter checks to see if the incoming flow matches the negotiated traffic profile.
Marker can re-mark a packet that is using best-effort delivery or down-mark a packet based on info received from the meter.
Shaper uses the info received from the meter to reshape the traffic.
Dropper works like a shaper with no buffer, discarding packets if the flow severely violates the negotiated profile.