1 Lecture 5: Congestion Control
- Challenge: how do we efficiently share network resources among billions of hosts?
  - Last time: TCP
  - This time: alternative solutions

2 Wide Design Space
- Router based: DECbit, fair queueing, RED
- Control theory: packet pair, TCP Vegas
- ATM: rate control, credits
- Economics and pricing

3 Standard “Drop Tail” Router
- “First in, first out” schedule for outputs
- Drop any arriving packet if no room
  - no explicit congestion signal
- Problems:
  - if you send more packets, you get more service
  - synchronization: free buffer => hosts send
  - more buffers can actually increase congestion

4 Router Solutions
- Modify both router and hosts
  - DECbit: congestion bit in packet header
- Modify router, hosts use TCP
  - Fair queueing: per-connection buffer allocation
  - RED (random early detection): drop packet or set bit in packet header

5 DECbit routers
- Router tracks average queue length
  - regeneration cycle: queue goes from empty to non-empty to empty
  - average from start of previous cycle
  - if average > 1, router sets bit for flows sending more than their share
  - if average > 2, router sets bit in every packet
  - bit can be set by any router in path
- Acks carry bit back to source
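The router's marking rule can be sketched as follows. This is a minimal sketch of the two thresholds above; the `flow_rate`/`fair_share` comparison is a simplified stand-in for "sending more than their share" (the real per-flow accounting is an assumption here):

```python
def decbit_mark(avg_qlen, flow_rate, fair_share):
    """Router-side DECbit decision for one arriving packet.

    avg_qlen: average queue length since the start of the previous
    regeneration cycle. flow_rate/fair_share stand in (simplified)
    for "this flow is sending more than its share".
    """
    if avg_qlen > 2:
        return True      # heavy congestion: set the bit in every packet
    if avg_qlen > 1 and flow_rate > fair_share:
        return True      # mild congestion: mark only over-share flows
    return False
```

The bit travels forward in the packet header, and the receiver copies it into the ack, which is how it gets back to the source.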

6 DECbit source
- Source averages across acks in window
  - congestion if > 50% of bits set
  - will detect congestion earlier than TCP
- Additive increase, multiplicative decrease
  - decrease factor = 0.875 (7/8 vs. TCP's 1/2)
  - after a change, ignore DECbit for packets in flight (vs. TCP ignoring other drops in the same window)
- No slow start
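The source-side rule is plain AIMD with the gentler 7/8 decrease. A minimal sketch, applied once per window of acks:

```python
def decbit_window_update(window, bits_set, acks_seen):
    """Source-side DECbit adjustment after a full window of acks.

    Congestion is declared when more than half of the acks in the
    window carried the congestion bit.
    """
    if bits_set / acks_seen > 0.5:
        return window * 0.875        # multiplicative decrease by 7/8
    return window + 1                # additive increase by one packet
```

The 7/8 factor makes DECbit oscillate less than TCP's halving, at the cost of draining queues more slowly.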

7 Random Early Detection
- Goal: improve TCP performance with minimal hardware changes
  - avoid TCP synchronization effects
  - decouple buffer size from congestion signal
- Compute average queue length
  - exponentially weighted moving average
  - if avg > low threshold, drop with low probability
  - if avg > high threshold, drop all
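The averaging and the two-threshold drop rule can be sketched as below; the threshold values, EWMA weight, and maximum probability are illustrative assumptions, not values from the slide:

```python
def red_avg(avg, qlen, weight=0.002):
    """Exponentially weighted moving average of the queue length."""
    return (1 - weight) * avg + weight * qlen

def red_drop_prob(avg, min_th=5.0, max_th=15.0, max_p=0.02):
    """Probability of dropping (or marking) an arriving packet."""
    if avg < min_th:
        return 0.0                       # below low threshold: no signal
    if avg >= max_th:
        return 1.0                       # above high threshold: drop all
    # between thresholds: probability rises linearly with the average
    return max_p * (avg - min_th) / (max_th - min_th)
```

Because drops are randomized and based on the average rather than the instantaneous queue, different TCP flows back off at different times, which breaks the synchronization that drop-tail causes.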

8 Max-min fairness
- At a single router:
  - allocate bandwidth equally among all users
  - if anyone doesn't need their share, redistribute it
  - maximize the minimum bandwidth provided to any flow not receiving its full request
- Network-wide fairness:
  - achieved if sources send at their minimum (max-min) rate along the path
  - what if rates are changing?
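At a single link, the max-min allocation can be computed by progressive filling, a sketch of which is below: serve flows in order of demand, and redistribute whatever a small flow doesn't need among the remaining flows.

```python
def max_min_allocation(demands, capacity):
    """Max-min fair shares of one link's capacity.

    Flows demanding less than the current equal share keep their full
    demand; the leftover capacity is split among the larger flows.
    """
    alloc = [0.0] * len(demands)
    remaining = capacity
    left = len(demands)
    for i in sorted(range(len(demands)), key=lambda i: demands[i]):
        share = remaining / left           # equal share of what's left
        alloc[i] = min(demands[i], share)  # never exceed the request
        remaining -= alloc[i]
        left -= 1
    return alloc
```

For example, demands of 2, 8, and 10 on a link of capacity 12 yield allocations of 2, 5, and 5: the small flow is satisfied, and the other two split the rest evenly.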

9 Implementing max-min fairness
- General processor sharing
  - per-flow queueing
  - bitwise round robin among all queues
- Why not simple round robin?
  - variable packet length => can get more service by sending bigger packets
  - unfair instantaneous service rate
    - what if you arrive just before/after a packet departs?

10 Fair Queueing
- Goals:
  - allocate resources equally among all users
  - low delay for interactive users
  - protection against misbehaving users
- Approach: simulate general processor sharing (bitwise round robin)
  - need to compute the number of competing flows at each instant
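The usual way to simulate bitwise round robin without actually interleaving bits is to transmit whole packets in order of the virtual time at which they would finish under bit-by-bit service. A minimal sketch, under the simplifying assumption that every flow stays backlogged (so a packet's finish tag is just its flow's cumulative bits; the general case needs the per-instant flow count the slide mentions):

```python
def fq_service_order(queues):
    """queues: {flow: [packet sizes in bits]}, all flows backlogged.

    Under bit-by-bit round robin every backlogged flow accumulates
    service at the same rate, so a packet finishes when its flow's
    cumulative size has been served; sending whole packets in
    finish-tag order approximates general processor sharing.
    """
    tagged = []
    for flow, sizes in queues.items():
        cum = 0
        for idx, size in enumerate(sizes):
            cum += size
            tagged.append((cum, flow, idx))
    return [(flow, idx) for _, flow, idx in sorted(tagged)]
```

Note how a flow sending small packets gets its packets out early, which is exactly the low-delay-for-interactive-users goal.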

11 Scheduling Background
- How do you minimize average response time?
  - by being unfair: shortest job first
- Example: equal-size jobs, all starting at t=0
  - round robin => all finish at the same time
  - FIFO => minimizes average response time
- Unequal-size jobs:
  - round robin => bad if lots of jobs
  - FIFO => small jobs delayed behind big ones
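The equal-size-job comparison above is easy to check numerically. A small sketch (the round-robin formula below is valid only for equal-size jobs, where every job completes at the very end):

```python
def avg_response_fifo(sizes):
    """FIFO: each job waits for all earlier jobs, then runs."""
    t = 0.0
    finish = []
    for s in sizes:
        t += s
        finish.append(t)
    return sum(finish) / len(sizes)

def avg_response_rr_equal(sizes):
    """Round robin with EQUAL-size jobs: the jobs proceed in lockstep,
    so every one of them finishes only when all the work is done."""
    return float(sum(sizes))
```

With four unit jobs, FIFO averages (1+2+3+4)/4 = 2.5 while round robin averages 4.0, which is why fairness and mean response time pull in opposite directions.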

12 Resource Allocation via Pricing
- Internet has flat-rate pricing
  - queueing delay = implicit price
  - no penalty for being a bad citizen
- Alternative: usage-based pricing
  - multiple priority levels with different prices
  - users self-select based on price sensitivity and expected quality of service
    - high priority for interactive jobs
    - low priority for background file transfers

13 Congestion Control Classification
- Explicit vs. implicit state measurement
  - explicit: DECbit, ATM rates, credits
  - implicit: TCP, packet pair
- Dynamic window vs. dynamic rate
  - window: TCP, DECbit, credits
  - rate: packet pair, ATM rates
- End to end vs. hop by hop
  - end to end: TCP, DECbit, ATM rates
  - hop by hop: credits, hop-by-hop rates

14 Packet Pair
- Implicit, dynamic rate, end to end
- Assumes fair queueing at all routers
- Send all packets in pairs
  - the bottleneck router will separate the packet pair at exactly the fair-share rate
- Average the rate across pairs (moving average)
- Set rate to achieve the desired queue length at the bottleneck
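The core measurement is simple: with fair queueing at the bottleneck, the spacing between the two back-to-back packets when they arrive reveals the flow's share of the bottleneck. A sketch (the smoothing factor `alpha` is an assumption):

```python
def bottleneck_rate(packet_size_bits, receive_gap_s):
    """Fair queueing at the bottleneck separates the pair by exactly
    one packet's worth of service time at the flow's fair-share rate."""
    return packet_size_bits / receive_gap_s

def smoothed_rate(old_estimate, new_sample, alpha=0.25):
    """Moving average of the per-pair estimates across many pairs."""
    return (1 - alpha) * old_estimate + alpha * new_sample
```

A 8000-bit packet pair arriving 1 ms apart implies a fair-share rate of 8 Mbit/s; individual samples are noisy, hence the moving average.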

15 TCP Vegas
- Implicit, dynamic window, end to end
- Compare expected to actual throughput
  - expected = window size / round-trip time
  - actual = acks / round-trip time
- If actual < expected, queues are building => decrease rate before a packet drop occurs
- If actual > expected, queues are draining => increase rate
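A sketch of the Vegas decision, using the base (minimum observed) RTT for the expected throughput. The `alpha`/`beta` thresholds bound how many packets the flow keeps queued in the network; their values here are the commonly cited defaults, assumed rather than taken from the slide:

```python
def vegas_window_update(window, base_rtt, rtt, alpha=1.0, beta=3.0):
    """window in packets; base_rtt = minimum RTT seen (empty queues).

    diff estimates how many of this flow's packets are sitting in
    router queues; Vegas steers it into the [alpha, beta] band.
    """
    expected = window / base_rtt        # throughput if queues were empty
    actual = window / rtt               # throughput actually achieved
    diff = (expected - actual) * base_rtt
    if diff > beta:
        return window - 1               # queues building: back off early
    if diff < alpha:
        return window + 1               # queues draining: probe upward
    return window
```

Because the signal is rising delay rather than loss, Vegas reacts before drop-tail routers would start discarding packets.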

16 ATM Forum Rate Control
- Explicit, dynamic rate, end to end
- Periodically send a rate control cell
  - switches in the path provide the minimum fair-share rate
  - immediate decrease, additive increase
  - if source goes idle, go back to initial rate
  - if no response, multiplicative decrease
  - fair share computed from:
    - observed rate
    - rate info provided by the host in the rate control cell
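How the rate control cell collects the path minimum can be sketched in a few lines: the cell carries the source's rate, and each switch lowers the field to its own local fair share.

```python
def rm_cell_min_rate(source_rate, switch_fair_shares):
    """Simulate a rate control cell traversing the path: each switch
    reduces the cell's rate field to its local fair share, so the
    source learns the minimum fair share along the whole path."""
    rate = source_rate
    for share in switch_fair_shares:
        rate = min(rate, share)
    return rate
```

A source asking for 100 through switches offering shares of 80, 120, and 60 ends up at 60, the tightest constraint on the path.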

17 ATM Forum Rate Control
- If switches don't support rate control:
  - switches set a congestion bit (as in DECbit)
  - exponential decrease, additive increase
  - interoperability prevents immediate increase even when switches do support rate control
- Hosts evenly space cells at the defined rate
  - avoids short bursts (which would foil rate control)
  - hard to implement if there are multiple connections per host

18 Hop by Hop Rate Control
- Explicit, dynamic rate, hop by hop
- Each switch measures the rate at which packets depart, per flow
  - switch sends rate info upstream
  - upstream switch throttles its rate to reach a target downstream buffer occupancy
- Advantage: a shorter control loop

19 Hop by Hop Credits
- Explicit, dynamic window, hop by hop
- Never send a packet without downstream buffer space
  - downstream switch sends credits as packets depart
  - upstream switch counts downstream buffers
- With FIFO queueing, head-of-line blocking:
  - buffers fill with traffic for the bottleneck
  - through traffic waits behind bottleneck traffic
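The credit discipline on one hop can be sketched as a counter of guaranteed downstream buffers: sending consumes a credit, and each credit returned from downstream restores one.

```python
class CreditedLink:
    """Upstream side of one credit-controlled hop: a packet may be
    sent only while credits (guaranteed downstream buffers) remain."""

    def __init__(self, downstream_buffers):
        self.credits = downstream_buffers

    def can_send(self):
        return self.credits > 0

    def send_packet(self):
        assert self.credits > 0, "never send without buffer space"
        self.credits -= 1            # one downstream buffer now occupied

    def credit_arrived(self, n=1):
        self.credits += n            # downstream forwarded n packets
```

The invariant `credits >= 0` is what makes the scheme lossless, but with shared FIFO buffers it is also what lets a bottleneck flow's packets hold buffers that through traffic needs.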

20 Head of Line Blocking
[Figure: crossbar switch with per-input FIFO queues; packets destined for one congested output block the packets queued behind them that are bound for other outputs]

21 Avoiding Head of Line Blocking
- Myrinet: make the network faster than the hosts
- AN2: per-flow queueing
- Static buffer space allocation?
  - link bandwidth * latency, per flow
- Dynamic buffer allocation
  - more buffers for higher-rate flows
  - what if a flow starts and stops?
    - Internet traffic is self-similar => highly bursty

22 TCP vs. Rates vs. Credits
- What would it take for a web response to take only a single RTT?
  - today: sending everything at once => more losses

23 Sharing congestion information
- Intra-host sharing
  - multiple web connections from one host
  - [Padmanabhan98, Touch97]
- Inter-host sharing
  - for a large server farm or a large client population
  - how much potential is there?

24 Destination locality

25 Sharing Congestion Information
[Figure: a Congestion Gateway at the Border Router between an Enterprise/Campus Network (with its subnets) and the Internet]

26 Time to Rethink?
- The end-to-end principle

27 Multicast Preview
- Send to multiple receivers at once
  - broadcasting, narrowcasting
  - telecollaboration
  - group coordination
- Revisit every aspect of networking:
  - routing
  - reliable delivery
  - congestion control

