Slide 1: MPAT: Aggregate TCP Congestion Management as a Building Block for Internet QoS. Manpreet Singh, Prashant Pradhan*, and Paul Francis.

Slide 2: Each TCP flow gets an equal share of bandwidth.

Slide 3: Our goal: enable bandwidth apportionment among TCP flows in a best-effort network.

Slide 4: Requirements.
- Transparency: no network support needed (ISPs, routers, gateways, etc.); clients remain unmodified.
- TCP-friendliness: the aggregate's "total" bandwidth should be the same as that of the same number of standard TCP flows.

Slide 5: Why is it so hard? The fair share of a TCP flow changes dynamically over time. [Diagram: a server and client separated by a bottleneck link carrying heavy cross-traffic.]

Slide 6: Why not open extra TCP flows? The pTCP scheme [Sivakumar et al.] opens more TCP flows for a high-priority application. But the resulting behavior is unfriendly to the network: a large number of flows active at a bottleneck leads to significant unfairness in TCP.

Slide 7: Why not modify the AIMD parameters? The mulTCP scheme [Crowcroft et al.] uses different AIMD parameters for each flow: increase more aggressively on successful transmission, decrease more conservatively on packet loss. Drawbacks: it is unfair to the background traffic, and it does not scale to larger differentials (large numbers of timeouts; two mulTCP flows running together try to "compete" with each other).

Slide 8: Properties of MPAT. Key insight: send the packets of one flow through the open congestion window of another flow.
- Scalability: substantial differentiation between flows (demonstrated up to 95:1), while holding the aggregate's fair share (demonstrated up to 100 flows).
- Adaptability: to changing performance requirements and transient network congestion.
- Transparency: changes only at the server side; friendly to other flows.

Slide 9: MPAT, an illustration. A server sends to an unmodified client with a target differentiation of 4:1. Each flow's fair-share congestion window is 5 (total 10); MPAT reassigns the windows so that Flow 1 gets 8 and Flow 2 gets 2. The total congestion window remains 10.
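As a concrete sketch of this reassignment (the names and structure here are illustrative, not from the paper), the apportionment step amounts to splitting the aggregate window by target weights:

```python
def apportion(cwnds, weights):
    """Split the aggregate congestion window across flows by weight.

    cwnds   -- current per-flow fair-share windows, e.g. {"flow1": 5, "flow2": 5}
    weights -- target differentiation, e.g. {"flow1": 4, "flow2": 1}

    The aggregate window is preserved, so the bundle as a whole still
    consumes exactly its TCP-fair share of the bottleneck.
    """
    total_cwnd = sum(cwnds.values())
    total_weight = sum(weights.values())
    return {flow: total_cwnd * weights[flow] / total_weight for flow in cwnds}

# Slide 9's example: two flows with fair share 5 each, target 4:1.
print(apportion({"flow1": 5, "flow2": 5}, {"flow1": 4, "flow2": 1}))
# -> {'flow1': 8.0, 'flow2': 2.0}
```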

Slide 10: MPAT, transmit processing. Three additional red packets are sent through the congestion window of the blue flow. [Diagram: red packets 1-5 travel through the red flow's cwnd; red packets 6-8 and blue packets 1-2 travel through the blue flow's cwnd.]

Slide 11: MPAT, implementation.
- New per-flow variable: the MPAT window. Actual window = min(MPAT window, receive window).
- Map each outgoing packet to one of the congestion windows, maintaining a virtual mapping:

  Packet         | Congestion window
  Red seqno 1-5  | Red
  Red seqno 6-8  | Blue
  Blue seqno 1-2 | Blue
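A minimal transmit-side sketch of this bookkeeping, with hypothetical names (MpatSender, transmit) that are not from the paper: each flow may send up to min(MPAT window, receive window), and every packet is charged to some flow's real congestion window, borrowing from another flow's open window when its own is full:

```python
class MpatSender:
    """Transmit-side sketch of MPAT's virtual mapping (illustrative only)."""

    def __init__(self, cwnd, mpat_wnd, rwnd):
        self.cwnd = cwnd            # real per-flow congestion windows
        self.mpat_wnd = mpat_wnd    # apportioned MPAT windows
        self.rwnd = rwnd            # receiver-advertised windows
        self.in_flight = {f: 0 for f in cwnd}  # packets charged to each real cwnd
        self.owner = {}             # (flow, seqno) -> flow whose cwnd carried it

    def send_window(self, flow):
        # Actual window = min(MPAT window, receive window).
        return min(self.mpat_wnd[flow], self.rwnd[flow])

    def transmit(self, flow, seqno):
        # Charge the packet to this flow's own cwnd if there is room;
        # otherwise borrow capacity from another flow's open cwnd
        # (e.g. red packets 6-8 sent through the blue window).
        for carrier in [flow] + [f for f in self.cwnd if f != flow]:
            if self.in_flight[carrier] < self.cwnd[carrier]:
                self.in_flight[carrier] += 1
                self.owner[(flow, seqno)] = carrier  # the virtual mapping
                return carrier
        return None  # no congestion-window space anywhere: hold the packet
```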

Slide 12: MPAT, receive processing. For every ACK received on a flow, update the congestion window through which that packet was sent. [Diagram: ACKs for red packets 1-5 update the red cwnd; ACKs for red packets 6-8 and blue packets 1-2 update the blue cwnd.]
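Continuing the sketch above, the ACK and loss paths consult the virtual mapping so that credit and blame land on the window that actually carried the packet:

```python
def on_ack(sender, flow, seqno):
    """ACK path: grow the congestion window through which the packet
    was actually sent (standard congestion-avoidance increase)."""
    carrier = sender.owner.pop((flow, seqno))
    sender.in_flight[carrier] -= 1
    sender.cwnd[carrier] += 1.0 / sender.cwnd[carrier]

def on_loss(sender, flow, seqno):
    """Loss path: halve the carrier's window, keeping every real
    window's loss response identical to standard TCP."""
    carrier = sender.owner.pop((flow, seqno))
    sender.in_flight[carrier] -= 1
    sender.cwnd[carrier] = max(1.0, sender.cwnd[carrier] / 2)
```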

Slide 13: TCP-friendliness. Invariant: each congestion window experiences the same loss rate.
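To see why the invariant implies friendliness (a standard steady-state argument, not a derivation from the paper): each real window still runs unmodified AIMD at the shared loss rate p and round-trip time R, so each averages the usual TCP window and the aggregate earns exactly N fair shares, which MPAT is then free to redistribute:

```latex
\bar{W} \approx \sqrt{\frac{3}{2p}}
\qquad\Longrightarrow\qquad
\text{aggregate throughput} \;\approx\; \sum_{i=1}^{N} \frac{\bar{W}}{R}
\;=\; \frac{N}{R}\sqrt{\frac{3}{2p}} ,
```

the same total as N independent standard TCP flows.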

Slide 14: MPAT decouples reliability from congestion control. The red flow remains responsible for the reliability of all red packets (e.g., buffering and retransmission), even those sent through the blue flow's window. This does not break the end-to-end principle.

Slide 15: Experimental setup.
- Wide-area network test-bed: PlanetLab, with experiments over the real Internet.
- User-level TCP implementation; unconstrained buffers at both ends.
- Goal: test the fairness and scalability of MPAT.

Slide 16: Bandwidth apportionment. MPAT can apportion available bandwidth among its flows, irrespective of the total fair share.

Slide 17: Scalability of MPAT. A 95:1 bandwidth differential was achieved in experiments.

Slide 18: Responsiveness. MPAT adapts very quickly to dynamically changing performance requirements.

Slide 19: Fairness. 16 MPAT flows with target ratio 1 : 2 : 3 : ... : 15 : 16, running against 10 standard TCP flows in the background.

Slide 20: Applicability in the real world.
- Deployment: enterprise networks, grid applications.
- Use cases: gold vs. silver customers, background transfers.

Slide 21: A sample enterprise network (running over the best-effort Internet): San Jose (database server), Zurich (transaction server), New York (web server), New Delhi (application server).

Slide 22: Background transfers: data that humans are not waiting for (not deadline-critical). Examples: prefetched Web traffic, file-system backups, large-scale data distribution services, background software updates, media file sharing, grid applications.

Slide 23: Future work.
- Benefit short flows: map multiple short flows onto a single long flow (warm start).
- Middlebox deployment: avoid changing all the senders.
- Detect shared congestion: subnet-based aggregation.

Slide 24: Conclusions. MPAT is a very promising approach to bandwidth apportionment.
- Highly scalable and adaptive: substantial differentiation between flows (demonstrated up to 95:1); adapts very quickly to transient network congestion.
- Transparent to the network and clients: changes only at the server side; friendly to other flows.

Slide 25: Extra slides.

Slide 26: Reduced variance. MPAT exhibits much lower variance in throughput than mulTCP.

Slide 27: Fairness across aggregates. Multiple MPAT aggregates "cooperate" with each other.

Slide 28: Multiple MPAT aggregates running simultaneously cooperate with each other.

Slide 29: Congestion Manager (CM). An end-system architecture for congestion management: CM abstracts all congestion-related information (per-aggregate cwnd, ssthresh, rtt, etc.) into one place, performs per-flow scheduling, and separates reliability from congestion control. Its goal is to ensure fairness. [Diagram: a sender running TCP1-TCP4 above a CM layer containing a congestion controller, scheduler, and flow integration, talking to the receiver; flows interact with CM through an API, callbacks, and feedback.]

Slide 30: Issues with CM. CM maintains one congestion window per "aggregate", leading to unfair bandwidth allocation to CM flows. [Diagram: five TCP flows sharing a single CM congestion window.]

Slide 31: mulTCP. Goal: design a mechanism to give one flow N times more bandwidth than another. TCP throughput = f(α, β) / (rtt · √p), where α is the additive-increase factor, β the multiplicative-decrease factor, p the loss probability, and rtt the round-trip time. mulTCP sets α = N and β = 1 − 1/(2N): increase more aggressively on successful transmission, decrease more conservatively on packet loss. But it does not scale with N: the induced loss process is very different from that of N standard TCP flows, and the controller becomes unstable as N increases.
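A hedged reconstruction of f(α, β) from the usual AIMD sawtooth model (one loss per cycle; window cut to βW on loss, grown by α per RTT; conventions vary across papers, so this is indicative rather than mulTCP's exact derivation):

```latex
\text{throughput} \;\approx\; \frac{1}{rtt}\sqrt{\frac{\alpha\,(1+\beta)}{2\,(1-\beta)\,p}} ,
\qquad
\left.\frac{\alpha(1+\beta)}{2(1-\beta)}\right|_{\alpha=1,\;\beta=\frac{1}{2}} = \frac{3}{2} .
```

With α = N and β = 1 − 1/(2N), the factor under the square root becomes 2N² − N/2 ≈ 2N², so throughput grows roughly linearly in N relative to a standard flow, which is the intended N:1 differential. In practice, though, the loss process the aggressive flow induces invalidates this steady-state model as N grows.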

Slide 32: Gain in throughput of mulTCP. [Graph]

Slide 33: Drawbacks of mulTCP.
- Does not scale with N: large number of timeouts; the loss process induced by a single mulTCP flow is very different from that of N standard flows.
- Increased variance: the sawtooth amplitude grows with N.
- Unstable controller as N grows: two mulTCP flows running together try to "compete" with each other.

Slide 34: TCP Nice. A two-level prioritization scheme: it can only give less bandwidth to low-priority applications; it cannot give more bandwidth to deadline-critical jobs.

