Bertha & M Sadeeq

 Easy to manage the problems
 Scalability
 Real time and real environment
 Free data collection
 Cost efficient
 DCTCP only covers:
◦ High burst tolerance
◦ Low latency
◦ High performance

 They worked on commodity, shallow-buffered switches with 6,000 servers at data rates of up to 10 Gbps.
 Low-cost switches at the top of the rack provide 48 ports at 1 Gbps.
 They find that DCTCP delivers the same or better throughput than TCP.

 Unlike TCP, DCTCP provides high burst tolerance and low latency for short flows.
 Many applications find it difficult to meet deadlines using state-of-the-art TCP; their applications carefully control the amount of data each worker sends and add jitter.
 They only measure and analyze compressed data (>150 TB).

 Let's assume their system is OK… They are working in a homogeneous, single-operator environment, but they did not discuss:
◦ Backward compatibility
◦ Incremental deployment

 F – fraction of packets marked
 α – the sender's running estimate of the fraction of marked packets
 g – the weight given to new samples against the past, 0 < g < 1
The RED-style marking idea is already in use with TCP, and the authors effectively concede this in their implementation: when α = 1, DCTCP cuts the window in half, just like TCP. So what is actually new here…???
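The estimator and window cut described by these bullets can be sketched in a few lines. This is an illustrative sketch of the rules as the paper states them, not the authors' actual implementation; the function names and the g = 1/16 default are our own choices.

```python
# Sketch of DCTCP's congestion estimator and window adjustment,
# following the rules described in the paper (not real kernel code).

def update_alpha(alpha, marked, total, g=1.0 / 16):
    """Update the running estimate alpha of the marked fraction.

    F = marked/total is the fraction of packets marked in the last
    window of data; g (0 < g < 1) weights new samples against the past.
    """
    f = marked / total
    return (1 - g) * alpha + g * f

def cut_window(cwnd, alpha):
    """On congestion, reduce the window in proportion to alpha."""
    return cwnd * (1 - alpha / 2)

# When alpha = 1 (every packet marked), DCTCP halves the window,
# exactly like standard TCP:
print(cut_window(100.0, 1.0))   # 50.0
# Under mild congestion (alpha = 0.1) the cut is far gentler:
print(cut_window(100.0, 0.1))   # 95.0
```

This makes the slide's complaint concrete: the only case in which DCTCP differs from TCP's multiplicative decrease is when α < 1, i.e. when congestion marks are sparse.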

 They assume N flows are synchronized. For this equation they rely on:
◦ TCP's window-cut behavior (for W(t))
◦ A single bottleneck of capacity C
◦ Identical round-trip times (RTT)
So I think your problem is solved… huh!
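The model these assumptions describe can be written out explicitly (a sketch following the paper's synchronized-sawtooth analysis; symbols as in the bullets above):

```latex
% N synchronized flows share one bottleneck of capacity C with a
% common round-trip time RTT; W(t) is the per-flow window, so the
% queue occupancy is
Q(t) = N\,W(t) - C \cdot RTT .
% Marking begins once Q(t) exceeds the threshold K, after which
% each sender applies DCTCP's cut:
W \leftarrow W\left(1 - \frac{\alpha}{2}\right)
```

Every one of the bulleted assumptions appears in these two lines, which is why the criticism stands: drop synchronization, the single bottleneck, or the identical RTTs, and the derivation no longer applies.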

 Based on workloads inside a homogeneous data center; does not take into account variable receiver/sender buffers or the coexistence of other protocols.
 Is existing hardware able to reproduce these results?
◦ The paper explicitly mentions Broadcom Triumph, Broadcom Scorpion, and Cisco CAT4948 switches, but how widely available are switches that support ECN?

 ECN is mentioned, but no measurements were performed contrasting DCTCP with plain TCP with ECN.
 All comparisons are made against a TCP New Reno (with SACK) implementation.
◦ Is this representative of what most data centers currently use?
 Only a single bottleneck was used for evaluation.

 "DCTCP trades off convergence time" in order to achieve its goals (Sec. 3.5)
 DCTCP converges quickly (Sec. 4.1)
 Don't these two claims conflict?

 The paper talks about separating DCTCP flows from TCP flows to improve performance, but it is not clear how this can be done when external traffic is included.

 The analysis in the previous section is for an idealized DCTCP source and does not capture any of the burstiness inherent in actual implementations of window-based congestion control in the network stack.
 Even in performance: when K > 65, DCTCP is no better than TCP, so what is gained?
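For context on where the K = 65 figure comes from: the paper gives a sizing guideline for the marking threshold, K > C × RTT / 7 (in packets). A quick back-of-the-envelope check, with assumed illustrative link speeds, RTT, and packet size rather than the paper's measured values:

```python
# Illustrative calculation of the DCTCP marking-threshold guideline
# K > C * RTT / 7 (in packets) from the paper. The link speeds, RTT,
# and 1500-byte packet size below are assumed example values.

def min_marking_threshold(link_bps, rtt_s, pkt_bytes=1500):
    """Bandwidth-delay product in packets, divided by 7."""
    bdp_packets = link_bps * rtt_s / 8 / pkt_bytes
    return bdp_packets / 7

# 1 Gbps ToR link with a 100-microsecond RTT:
print(round(min_marking_threshold(1e9, 100e-6), 1))
# 10 Gbps link, same RTT:
print(round(min_marking_threshold(10e9, 100e-6), 1))
```

The point of the exercise: the required K grows with the bandwidth-delay product, so at higher speeds or longer RTTs the threshold climbs toward the regime the slide criticizes, where DCTCP's marking behaves much like plain TCP's.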

 Graphs are not on the same page where they are referenced.
 Estimation gain, g:
◦ Sec. 3.1 – "the weight given to new samples against the past in the estimation of" α
◦ Two pages later, a section is labeled "estimation gain"
 The paper suggests no future work…

 The paper has only been cited 9 times…
 One of those citations is the authors' own follow-up paper analyzing DCTCP.
◦ Its conclusion: "Our analysis shows that DCTCP can achieve very high throughput while maintaining low buffer occupancies." …Sound familiar?