Queuing and Queue Management


Queuing and Queue Management COS 561: Advanced Networking Spring 2012 Mike Freedman http://www.cs.princeton.edu/courses/archive/fall12/cos561/

Outline of Today’s Lecture
- Router Queuing Models
- Limitations of FIFO and Drop Tail
- Scheduling Policies: Fair Queuing
- Drop Policies: Random Early Detection (of congestion), Explicit Congestion Notification (from routers)
- Rate Limiting: Leaky/Token Bucket algorithm

Router Data and Control Planes [Diagram: a processor implements the control plane; line cards interconnected by a switching fabric implement the data plane]

Packet Switching and Forwarding [Diagram: router R1 with four links; each ingress looks up the packet’s destination (e.g., “4”) to choose the egress link]

Router Design Issues
- Scheduling discipline: Which packet to send? Some notion of fairness? Priority?
- Drop policy: When should you discard a packet? Which packet to discard?
- Need to balance throughput and delay: huge buffers minimize drops, but add to queuing delay (thus higher RTT, longer slow start, …)

FIFO Scheduling and Drop-Tail
- Access to the bandwidth: first-in, first-out queue; packets are only differentiated when they arrive
- Access to the buffer space: drop-tail queuing; if the queue is full, drop the incoming packet
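A minimal Python sketch of the combination described on this slide, FIFO scheduling with drop-tail buffering; the class and method names (DropTailQueue, enqueue, dequeue) are illustrative only.

```python
from collections import deque

class DropTailQueue:
    """Sketch of FIFO scheduling with drop-tail buffering."""

    def __init__(self, capacity):
        self.capacity = capacity   # maximum number of buffered packets
        self.buffer = deque()

    def enqueue(self, packet):
        """Drop-tail: if the buffer is full, the arriving packet is discarded."""
        if len(self.buffer) >= self.capacity:
            return False           # packet dropped
        self.buffer.append(packet)
        return True

    def dequeue(self):
        """FIFO scheduling: always transmit the oldest buffered packet."""
        return self.buffer.popleft() if self.buffer else None
```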

Problems with Tail Drop
- Under stable conditions, the queue is almost always full, which leads to high latency for all traffic
- Possibly unfair for flows with small windows: larger flows may fast retransmit (detecting loss through triple duplicate ACKs), while small flows may have to wait for a timeout
- Window synchronization (more on this later…)

Scheduling Policies: (Weighted) Fair Queuing (and Class-based Quality of Service)

Fair Queuing (FQ)
- Maintains a separate queue per flow
- Ensures no flow consumes more than its 1/n share
- Variation: weighted fair queuing (WFQ)
- If all packets were the same length, this would be easy
- If non-work-conserving (resources can go idle), it also would be easy, but at lower utilization
[Diagram: flows 1–4 served in round-robin order onto the egress link]

Fair Queuing Basics
- Track how much time each flow has used the link
- Compute the time each flow would have used if it transmits its next packet
- Send the packet from the flow that will have the lowest use if it transmits
- Why not the flow with the smallest use so far? Because its next packet may be huge!

FQ Algorithm: Virtual Finish Time
- Imagine one clock tick per bit, so transmission time ~ length
- Finish time F_i = max(F_{i-1}, arrival time A_i) + length P_i
- Calculate the estimated F_i for all queued packets
- Transmit the packet with the lowest F_i next
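As a small worked example (numbers chosen purely for illustration): suppose flow A’s previous packet had finish time F_{i-1} = 100 and its next packet arrived at A_i = 90 with length P_i = 50, giving F_i = max(100, 90) + 50 = 150. Flow B was idle (F_{i-1} = 80) and its next packet arrived at A_i = 120 with length P_i = 20, giving F_i = max(80, 120) + 20 = 140. Since 140 < 150, flow B’s packet is transmitted first.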

FQ Algorithm (2)
- Problem: can’t preempt the packet currently being transmitted
- Result: inactive flows (A_i > F_{i-1}) are penalized
- The standard algorithm considers no history; each flow gets its fair share only while it has packets queued

FQ Algorithm (3)
- Approach: give more promptness to flows that have historically used less bandwidth
- Bid B_i = max(F_{i-1}, A_i − δ) + P_i
- Intuition: with a larger δ, scheduling decisions are based on the last transmission time F_{i-1} more often, thus favoring slower flows
- FQ achieves max-min fairness: first maximize the minimum rate of any active flow, then maximize the second-lowest rate, etc.
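Below is a minimal Python sketch of the bookkeeping described on the last three slides: per-flow queues, a finish time F_i per flow, and the bid B_i used to pick the next packet. It is a simplification that uses packet arrival timestamps in place of the virtual clock (the full algorithm tracks a round number instead), and all names (FQScheduler, enqueue, dequeue, delta) are illustrative, not from any particular router implementation.

```python
class FQScheduler:
    """Sketch of fair queuing with virtual finish times and bids."""

    def __init__(self, delta=0.0):
        self.delta = delta          # promptness parameter from "FQ Algorithm (3)"
        self.queues = {}            # flow id -> list of (arrival_time, length)
        self.last_finish = {}       # flow id -> F_{i-1}, finish time of the flow's previous packet

    def enqueue(self, flow, arrival_time, length):
        self.queues.setdefault(flow, []).append((arrival_time, length))

    def _bid(self, flow):
        """Bid B_i = max(F_{i-1}, A_i - delta) + P_i for the flow's head-of-line packet."""
        arrival, length = self.queues[flow][0]
        prev_finish = self.last_finish.get(flow, 0.0)
        return max(prev_finish, arrival - self.delta) + length

    def dequeue(self):
        """Transmit the head-of-line packet with the lowest bid; returns (flow, packet) or None."""
        active = [f for f, q in self.queues.items() if q]
        if not active:
            return None
        flow = min(active, key=self._bid)
        arrival, length = self.queues[flow].pop(0)
        # F_i = max(F_{i-1}, A_i) + P_i as on the slides; delta affects only the bid used for selection
        self.last_finish[flow] = max(self.last_finish.get(flow, 0.0), arrival) + length
        return flow, (arrival, length)
```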

Uses of (W)FQ
- Scalability: the number of queues must equal the number of flows, but (W)FQ can be used on edge routers, low-speed links, or shared end hosts
- (W)FQ can be applied to classes of traffic, not just flows: use IP TOS bits to mark “importance”, part of the “Differentiated Services” architecture for “Quality of Service” (QoS)

Drop Policies: Drop Tail, Random Early Detection (RED), Explicit Congestion Notification (ECN)

Bursty Loss From Drop-Tail Queuing
- TCP depends on packet loss: loss is its indication of congestion, and TCP drives the network into loss by additive rate increase
- Drop-tail queuing leads to bursty loss: if the link is congested, many packets encounter a full queue
- Thus, loss synchronization: many flows lose one or more packets, and in response many flows halve their sending rates

Slow Feedback from Drop Tail
- Feedback comes only when the buffer is completely full, even though the buffer has been filling for a while
- Plus, the filling buffer is increasing the RTT, making loss detection even slower
- It might be better to give early feedback and get one or two connections to slow down before it’s too late

Random Early Detection (RED)
- Basic idea: the router notices that the queue is getting backlogged and randomly drops packets to signal congestion
- Packet drop probability: below a minimum average queue length, drop nothing; above it, the drop probability increases with the average queue length, set as a function of the average queue length and the time since the last drop
[Plot: drop probability (0 to 1) vs. average queue length]
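A minimal Python sketch of the RED drop decision, assuming the textbook formulation with minimum/maximum thresholds and an exponentially weighted moving average of the queue length; the parameter names and default values (min_th, max_th, max_p, weight) are placeholders, not recommended settings.

```python
import random

class REDQueue:
    """Sketch of Random Early Detection drop logic (not a full router queue)."""

    def __init__(self, min_th=5, max_th=15, max_p=0.1, weight=0.002):
        self.min_th = min_th      # below this average length, never drop
        self.max_th = max_th      # above this average length, always drop
        self.max_p = max_p        # drop probability as the average approaches max_th
        self.weight = weight      # EWMA weight for averaging the queue length
        self.avg = 0.0            # exponentially weighted average queue length
        self.count = -1           # packets enqueued since the last drop

    def should_drop(self, current_queue_len):
        # Average the instantaneous length so short bursts are tolerated
        self.avg = (1 - self.weight) * self.avg + self.weight * current_queue_len
        if self.avg < self.min_th:
            self.count = -1
            return False
        if self.avg >= self.max_th:
            self.count = 0
            return True
        # In between: drop probability grows with the average length,
        # spread out in time by the count of packets since the last drop
        self.count += 1
        p_b = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
        p_a = p_b / max(1 - self.count * p_b, 1e-9)
        if random.random() < p_a:
            self.count = 0
            return True
        return False
```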

Properties of RED
- Drops packets before the queue is full, in the hope of reducing the rates of some flows
- Drops packets in proportion to each flow’s rate: high-rate flows have more packets in the queue and hence a higher chance of being selected
- Drops are spaced out in time, which should help desynchronize the TCP senders
- Tolerant of burstiness in the traffic, by basing decisions on the average queue length

Problems With RED
- Hard to get the tunable parameters just right: How early to start dropping packets? What slope for the increase in drop probability? What time scale for averaging the queue length?
- RED has mixed adoption in practice: if the parameters aren’t set right, RED doesn’t help, and it’s hard to know how to set them
- Many other variations in the research community, with names like “Blue” (self-tuning) and “FRED”…

Feedback: From Loss to Notification
- Early dropping of packets: good because it gives early feedback, bad because it has to drop the packet to give that feedback
- Explicit Congestion Notification: the router marks the packet with an ECN bit, and the sending host interprets the mark as a sign of congestion

Explicit Congestion Notification
- RFC 3168 (2001): must be supported by router, sender, AND receiver; end-hosts determine whether both are ECN-capable during the TCP handshake
- ECN involves all three parties (and 4 header bits):
- The sender marks “ECN-capable” in the IP ToS field when sending
- If a router sees “ECN-capable” and is experiencing congestion, it marks the packet as “ECN congestion experienced” in the IP header
- If the receiver sees “congestion experienced”, it sets the “ECN echo” TCP header flag in responses until the congestion is ACK’d
- If the sender sees “ECN echo”, it reduces cwnd and sets the “congestion window reduced” flag in its next TCP packet
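The following toy Python simulation walks through that signaling loop end to end. It is a sketch only: the Packet fields (ect, ce, ece, cwr) mirror the four header bits named above, but the Sender/Router/Receiver classes and their methods are illustrative, not any real TCP/IP stack, and a real sender reacts to ECN echo at most once per RTT.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    ect: bool = True    # IP: "ECN-capable transport" (set by an ECN-capable sender)
    ce: bool = False    # IP: "congestion experienced" (set by a congested router)
    ece: bool = False   # TCP: "ECN echo" (set by the receiver in its responses)
    cwr: bool = False   # TCP: "congestion window reduced" (set by the sender)

class Router:
    def __init__(self, congested=False):
        self.congested = congested
    def forward(self, pkt):
        if pkt.ect and self.congested:
            pkt.ce = True          # mark the packet instead of dropping it
        return pkt

class Receiver:
    def __init__(self):
        self.echoing = False
    def ack(self, pkt):
        if pkt.ce:
            self.echoing = True    # keep echoing until the sender acknowledges with CWR
        if pkt.cwr:
            self.echoing = False
        return Packet(ece=self.echoing)

class Sender:
    def __init__(self, cwnd=10):
        self.cwnd = cwnd
        self.pending_cwr = False
    def send(self):
        pkt = Packet(cwr=self.pending_cwr)
        self.pending_cwr = False
        return pkt
    def on_ack(self, ack):
        if ack.ece:
            self.cwnd = max(1, self.cwnd // 2)   # react as if a packet had been lost
            self.pending_cwr = True

# One round trip through a congested router:
sender, router, receiver = Sender(), Router(congested=True), Receiver()
ack = receiver.ack(router.forward(sender.send()))
sender.on_ack(ack)       # cwnd halves; the next packet will carry CWR
```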

Explicit Congestion Notification (2)
- Why an extra “ECN echo” flag for the response? Congestion could happen in either direction, and we want the sender to react to congestion in the forward direction
- Why ACK with “congestion window reduced”? The ECN echo could be lost, but ideally we respond only to congestion in the forward direction
- Why not have the router respond to the sender directly? That requires slow-path generation of a packet

Rate Limiting: Leaky Bucket / Token Bucket Algorithm

Rate Limiting
- Send data at a rate conforming to limits on both bandwidth (average rate) and burstiness (variance)
- Linux token buckets: a token is added every 1/r seconds, and the bucket can hold at most b tokens (the maximum burst size)
- When a packet of size n arrives: if there are n tokens in the bucket, n tokens are removed and the packet is sent; else, the packet is non-conforming (queued, dropped, or marked)
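A minimal Python sketch of the token bucket described above: tokens accumulate at rate r up to a burst of b, and a packet of size n conforms only if n tokens are available. The names (TokenBucket, conforms) and the byte-based units in the example are illustrative, not the Linux tc implementation.

```python
import time

class TokenBucket:
    """Sketch of a token bucket: r tokens/sec, at most b tokens stored."""

    def __init__(self, rate, burst):
        self.rate = rate              # r: tokens added per second
        self.burst = burst            # b: maximum number of tokens in the bucket
        self.tokens = burst           # start with a full bucket
        self.last = time.monotonic()

    def conforms(self, packet_size):
        """Return True if the packet may be sent now; consume tokens if so."""
        now = time.monotonic()
        # Refill: one token every 1/rate seconds, capped at the burst size
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_size:
            self.tokens -= packet_size
            return True
        return False      # non-conforming: caller may queue, drop, or mark the packet

# Example: 1000 tokens/sec (e.g., bytes/sec), bursts of up to 1500
tb = TokenBucket(rate=1000, burst=1500)
print(tb.conforms(1200))   # True: the bucket starts full
print(tb.conforms(1200))   # Likely False immediately afterwards
```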

Conclusions
- Congestion is inevitable: the Internet does not reserve resources in advance, and TCP actively tries to push the envelope
- TCP can react to congestion (multiplicative decrease)
- Active Queue Management can further help: Random Early Detection (RED), Explicit Congestion Notification (ECN)
- Fundamental tensions: Feedback from the network? Enforcement of “TCP friendly” behavior? Enforcement at end-hosts?
- Other scheduling policies (FQ) can give stronger guarantees