Congestion control for Multipath TCP (MPTCP)
Damon Wischik, Costin Raiciu, Adam Greenhalgh, Mark Handley
The Royal Society

Presentation transcript:


Packet switching 'pools' circuits. Multipath 'pools' links: it is Packet Switching 2.0. TCP controls how a link is shared. How should a pool be shared? What is TCP 2.0?
[Figure: two circuits vs. a single link; two separate links vs. a pool of links]

In a BCube data center, can we use multipath to get higher throughput? Initially, there is one flow.

In a BCube data center, can we use multipath to get higher throughput? Initially, there is one flow. A new flow starts. Its direct route collides with the first flow.

In a BCube data center, can we use multipath to get higher throughput? Initially, there is one flow. A new flow starts. Its direct route collides with the first flow. But it also has longer routes available, which don't collide.

How can a wireless device use two channels simultaneously, without hurting other users of the network? How should it balance its traffic, when the channels have very different characteristics?
[Figure: wifi path: high loss, small RTT; 3G path: low loss, high RTT]

What were our design goals for MPTCP congestion control? I will propose design goals for Multipath TCP, and illustrate them in simple scenarios. I will show experimental results for our MPTCP algorithm in these simple scenarios.

What is the MPTCP protocol? MPTCP is a replacement for TCP which lets you use multiple paths simultaneously.
[Figure: protocol stack: socket API in user space, MPTCP above TCP/IP, with multiple addresses]
The sender stripes packets across paths. The receiver puts the packets back in the correct order.

Design goal 1: Multipath TCP should be fair to regular TCP at shared bottlenecks. To be fair, Multipath TCP should take as much capacity as TCP at a bottleneck link, no matter how many paths it is using. Strawman solution: run "½ TCP" on each path.
[Figure: a multipath TCP flow with two subflows, sharing a bottleneck with a regular TCP flow]

Design goal 2: MPTCP should use efficient paths. Each flow has a choice of a 1-hop and a 2-hop path. How should it split its traffic?
[Figure: flows with 1-hop and 2-hop paths over 12Mb/s links]

Design goal 2: MPTCP should use efficient paths. If each flow split its traffic 1:1, each flow would get 8Mb/s.
[Figure: 12Mb/s links]

Design goal 2: MPTCP should use efficient paths. If each flow split its traffic 2:1, each flow would get 9Mb/s.
[Figure: 12Mb/s links]

Design goal 2: MPTCP should use efficient paths. If each flow split its traffic 4:1, each flow would get 10Mb/s.
[Figure: 12Mb/s links]

Design goal 2: MPTCP should use efficient paths. If each flow sent all its traffic on its 1-hop path (an ∞:1 split), each flow would get 12Mb/s.

Design goal 2: MPTCP should use efficient paths. Theoretical solution (Kelly and Voice 2005; Han, Towsley et al. 2006): MPTCP should send all its traffic on its least-congested paths. Theorem: this leads to the most efficient allocation possible, given a network topology and a set of available paths.
[Figure: 12Mb/s links]
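The split-ratio numbers on the preceding slides can be checked with a short sketch. It assumes the scenario is the symmetric triangle example (three flows over three 12Mb/s links; each flow's 2-hop path crosses the other two flows' links), which is an assumption inferred from the numbers shown, not stated in the transcript:

```python
def per_flow_throughput(ratio, capacity=12.0):
    """Per-flow throughput when every flow splits its traffic ratio:1
    between its 1-hop and 2-hop path, in the symmetric triangle scenario.
    Each link carries x from its own flow plus y from each of the other
    two flows' 2-hop paths, so it is full when x + 2*y == capacity,
    with x = ratio * y."""
    y = capacity / (ratio + 2)   # traffic on the 2-hop path
    x = ratio * y                # traffic on the 1-hop path
    return x + y

for ratio in (1, 2, 4, 1e9):     # 1:1, 2:1, 4:1, and effectively all-1-hop
    print(ratio, round(per_flow_throughput(ratio), 2))
```

This reproduces the progression on the slides: 8, 9, 10, and finally 12Mb/s when all traffic uses the 1-hop path.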

MPTCP chooses efficient paths in a BCube data center, hence it gets high throughput. Initially, there is one flow. A new flow starts. Its direct route collides with the first flow. But it also has longer routes available, which don't collide. MPTCP shifts its traffic away from the congested link.

MPTCP chooses efficient paths in a BCube data center, hence it gets high throughput. We ran packet-level simulations of BCube (125 hosts, 25 switches, 100Mb/s links) and measured average throughput for three traffic matrices. For two of the traffic matrices, MPTCP and ½ TCP (the strawman) were equally good. For the third, MPTCP got 19% higher throughput.
[Figure: throughput [Mb/s] for the permutation, sparse, and local traffic matrices]

Design goal 3: MPTCP should be fair compared to TCP. Design goal 2 says to send all your traffic on the least congested path, in this case 3G. But this path has a high RTT, so it will give low throughput.
[Figure: wifi path: high loss, small RTT; 3G path: low loss, high RTT]
Goal 3a. A Multipath TCP user should get at least as much throughput as a single-path TCP would on the best of the available paths.
Goal 3b. A Multipath TCP flow should take no more capacity on any link than a single-path TCP would.

MPTCP gives fair throughput.
[Figure: wifi and 3G throughput [Mb/s] over time [min], as the user moves from his office, downstairs, to the kitchen]

MPTCP gives fair throughput. 3G has the lower loss rate, so design goal 2 says to shift traffic onto 3G. But today, TCP over 3G was only getting 0.4Mb/s, so MPTCP takes no more than that. And today, TCP over wifi was getting 2.2Mb/s, so the user is entitled to that much in total. MPTCP therefore sends 0.4Mb/s over 3G, and the remaining 1.8Mb/s over wifi.
[Figure: wifi and 3G throughput [Mb/s] over time [min]]
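The arithmetic behind goals 3a and 3b can be sketched as a greedy allocation. This is an illustrative model, not the MPTCP algorithm itself: paths are assumed to be listed in preference order (least congested first), and each `tcp_rate` is the throughput a single-path TCP would get on that path:

```python
def fair_split(paths):
    """paths: list of (name, tcp_rate) pairs, least congested path first.
    Total throughput equals the best single-path TCP rate (goal 3a), and
    no path carries more than a single-path TCP would on it (goal 3b)."""
    budget = max(rate for _, rate in paths)   # goal 3a: total to allocate
    alloc = {}
    for name, rate in paths:
        alloc[name] = min(rate, budget)       # goal 3b: per-path cap
        budget -= alloc[name]
    return alloc

# The slide's example: TCP gets 0.4Mb/s on 3G, 2.2Mb/s on wifi.
print(fair_split([("3g", 0.4), ("wifi", 2.2)]))
```

For the slide's numbers this yields 0.4Mb/s on 3G and the remaining 1.8Mb/s on wifi, matching the allocation described above.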

MPTCP gives fair throughput. We measured throughput for both ½ TCP (the strawman) and MPTCP, in the office. ½ TCP is unfair to the user: its throughput is 25% worse than MPTCP's.
[Figure: wifi and 3G throughput [Mb/s] over time [min]]

Design goals
Goal 1. Be fair to TCP at bottleneck links.
Goal 2. Use efficient paths...
Goal 3. ...as much as we can, while being fair to TCP.
Goal 4. Adapt quickly when congestion changes.
Goal 5. Don't oscillate. (redundant)
How does MPTCP achieve all this?

How does TCP congestion control work? Maintain a congestion window w. Increase w for each ACK, by 1/w. Decrease w for each drop, by w/2.
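The slide's additive-increase/multiplicative-decrease rule can be written down directly. This is a per-ACK sketch in window units; real TCP works in bytes and segments, and slow start and other details are omitted:

```python
def on_ack(w):
    # Additive increase: +1/w per ACK, so w grows by about one packet per RTT.
    return w + 1.0 / w

def on_drop(w):
    # Multiplicative decrease: halve the window on a loss.
    return w / 2.0

w = 10.0
for _ in range(10):      # ten ACKs arrive
    w = on_ack(w)
w = on_drop(w)           # then one loss
print(round(w, 2))
```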

How does MPTCP congestion control work? Maintain one congestion window w_r per path, where r ∈ R ranges over the set of available paths. Increase w_r for each ACK on path r, by [formula]. Decrease w_r for each drop on path r, by w_r/2.

How does MPTCP congestion control work? Maintain one congestion window w_r per path, where r ∈ R ranges over the set of available paths. Increase w_r for each ACK on path r, by [formula]. Decrease w_r for each drop on path r, by w_r/2.
Design goal 3: at any potential bottleneck S that path r might be in, look at the best that a single-path TCP could get, and compare it to what I'm getting.

How does MPTCP congestion control work? Maintain one congestion window w_r per path, where r ∈ R ranges over the set of available paths. Increase w_r for each ACK on path r, by [formula]. Decrease w_r for each drop on path r, by w_r/2.
Design goal 2: we want to shift traffic away from congestion. To achieve this, we increase windows in proportion to their size.
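The increase formula itself did not survive the transcript. The published "linked increases" algorithm by these authors (standardized as RFC 6356) has the shape described on the slides; the sketch below follows the RFC's window-based form and is a reconstruction, not necessarily the exact formula shown:

```python
def alpha(windows, rtts):
    """Aggressiveness parameter from RFC 6356, chosen so the multipath flow
    gets as much as a single-path TCP would on its best path (goal 3)."""
    total = sum(windows)
    best = max(w / r ** 2 for w, r in zip(windows, rtts))
    return total * best / sum(w / r for w, r in zip(windows, rtts)) ** 2

def on_ack(windows, rtts, i):
    """Per ACK on path i, increase w_i by min(alpha/w_total, 1/w_i):
    coupled across paths so traffic shifts away from congestion (goal 2),
    but never more aggressive than a regular TCP on that path (goal 1)."""
    a = alpha(windows, rtts)
    windows[i] += min(a / sum(windows), 1.0 / windows[i])

def on_drop(windows, i):
    windows[i] /= 2.0    # same multiplicative decrease as TCP

# With a single path the rule reduces exactly to TCP's +1/w per ACK.
w = [10.0]
on_ack(w, [0.1], 0)
print(w[0])   # prints 10.1
```

With two equally good paths, each window grows more slowly than a standalone TCP's would, so the pair together stays fair to TCP at a shared bottleneck.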

MPTCP is a control plane for multipath transport. What problem is it trying to solve?

At a multihomed web server, MPTCP tries to share the 'pooled access capacity' fairly.
[Figure: two access links; 2 flows at 50Mb/s on one, 4 flows at 25Mb/s on the other]

At a multihomed web server, MPTCP tries to share the 'pooled access capacity' fairly.
[Figure: two access links; 2+1 flows at 33Mb/s on one, 4 flows at 25Mb/s on the other]

At a multihomed web server, MPTCP tries to share the 'pooled access capacity' fairly. The total capacity, 200Mb/s, is shared out evenly between all 8 flows.
[Figure: 8 flows, each getting 25Mb/s]

At a multihomed web server, MPTCP tries to share the 'pooled access capacity' fairly. The total capacity, 200Mb/s, is shared out evenly between all 9 flows. It's as if they were all sharing a single 200Mb/s link. The two links can be said to form a 200Mb/s pool.
[Figure: 9 flows, each getting 22Mb/s]

At a multihomed web server, MPTCP tries to share the 'pooled access capacity' fairly. The total capacity, 200Mb/s, is shared out evenly between all 10 flows. It's as if they were all sharing a single 200Mb/s link. The two links can be said to form a 200Mb/s pool.
[Figure: 10 flows, each getting 20Mb/s]

At a multihomed web server, MPTCP tries to share the 'pooled access capacity' fairly. We confirmed in experiments that MPTCP nearly manages to pool the capacity of the two access links. Setup: two 100Mb/s access links, 10ms delay; first 20 flows, then 30 (5 TCPs and 15 TCPs on the two links, plus first 0, then 10 MPTCPs).
[Figure: throughput per flow [Mb/s] over time [min]]

At a multihomed web server, MPTCP tries to share the 'pooled access capacity' fairly. MPTCP makes a collection of links behave like a single large pool of capacity, i.e. if the total capacity is C, and there are n flows, each flow gets throughput C/n.
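The "single pooled link" claim is easy to state numerically, and the per-flow rates on the preceding slides all follow from it:

```python
def pooled_share(capacities, n_flows):
    # If the links pool perfectly, each of the n flows gets C/n,
    # where C is the sum of the individual link capacities.
    return sum(capacities) / n_flows

# Two 100Mb/s access links, as in the web-server example:
for n in (8, 9, 10):
    print(n, round(pooled_share([100, 100], n), 1))
```

This reproduces the slides' 25, 22, and 20Mb/s per-flow figures for 8, 9, and 10 flows.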

Open question: can we make a data center behave like a simple 'capacity pool'? What topologies and path choices would make it behave like one? What is the pooled capacity?

Further questions
How much of the Internet can be pooled? What are the implications for network operators?
How should we fit multipath congestion control to CompoundTCP or CubicTCP?
Is it worth using multipath for small flows?

Conclusion
Multipath is Packet Switching 2.0: it lets you share capacity between links.
MPTCP is TCP 2.0: it is a control plane to harness the flexibility of multipath. It is traffic engineering, done by end-systems, and it works in milliseconds.
We formulated design goals and test scenarios for how multipath congestion control should behave. We designed, implemented and evaluated MPTCP, a TCP-like congestion control algorithm that achieves these goals.

Related work on multipath congestion control: pTCP, CMT over SCTP, and M/TCP.
...that meets goal 1 (fairness at shared bottleneck): mTCP, ≈ R-MTP.
...and goal 2 (choosing efficient paths): Honda et al. (2009), ≈ Tsao and Sivakumar (2009).
...and goal 5 (non-oscillation): Kelly and Voice (2005), Han et al. (2006).
...and goal 3 (fairness) and goal 4 (rapid adjustment): (none).