
1 6.888: Lecture 3, Data Center Congestion Control. Mohammad Alizadeh, Spring 2016

2 Transport inside the DC. Internet: 100Kbps–100Mbps links, ~100ms latency. Data center fabric connecting the servers: 10–40Gbps links, ~10–100μs latency.

3 Transport inside the DC. The fabric is the interconnect for distributed compute workloads running on the servers: web apps, databases, map-reduce, HPC, monitoring, caches.

4 What's Different About DC Transport? Network characteristics: very high link speeds (Gb/s), very low latency (microseconds). Application characteristics: large-scale distributed computation. Challenging traffic patterns: diverse mix of mice & elephants, incast. Cheap switches: single-chip shared-memory devices with shallow buffers.

5 Data Center Workloads: Mice & Elephants. Short messages (e.g., query, coordination) need low latency; large flows (e.g., data update, backup) need high throughput.

6 Incast (Vasudevan et al., SIGCOMM '09): synchronized fan-in congestion. Workers 1–4 answer an aggregator at the same time; a TCP timeout (RTO_min = 300 ms) stalls the whole request.
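
To see why a single timeout is so costly here, a rough back-of-the-envelope comparison; the 32KB response size and 1Gbps edge link are illustrative assumptions, not numbers from the slide:

```latex
\[
t_{\text{transfer}} \approx \frac{32\,\mathrm{KB} \times 8}{1\,\mathrm{Gb/s}} \approx 0.26\,\mathrm{ms},
\qquad
t_{\text{after one drop}} \approx RTO_{\min} + t_{\text{transfer}} \approx 300\,\mathrm{ms},
\]
```

roughly a thousandfold inflation of the flow completion time, which is why synchronized fan-in losses dominate the tail.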

7 Incast in Bing: MLA query completion time (ms). Requests are jittered over a 10ms window; jittering is switched off around 8:30 am. Jittering trades off the median for the high percentiles.

8 DC Transport Requirements: 1. Low latency (short messages, queries). 2. High throughput (continuous data updates, backups). 3. High burst tolerance (incast). The challenge is to achieve these together.

9 High Throughput, Low Latency. Baseline fabric latency (propagation + switching): 10 microseconds.

10 High Throughput, Low Latency. High throughput requires buffering for rate mismatches, but this adds significant queuing latency. Baseline fabric latency (propagation + switching): 10 microseconds.

11 Data Center TCP

12 TCP in the Data Center. TCP [Jacobson et al. '88] is widely used in the data center: more than 99% of the traffic. Operators work around TCP problems with ad-hoc, inefficient, often expensive solutions, and TCP is deeply ingrained in applications. Practical deployment is hard → keep it simple!

13 Review: The TCP Algorithm. Additive increase: W ← W + 1 per round-trip time. Multiplicative decrease: W ← W/2 per drop or ECN mark (ECN = Explicit Congestion Notification, a 1-bit mark set by the switch). (Figure: window size/rate over time for two senders sharing one receiver.)
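
As a toy illustration of the AIMD rule on this slide, a per-RTT window update in Python; the function name and the 1-packet starting window are assumptions made for the example:

```python
def aimd_update(window: float, congestion_signal: bool) -> float:
    """One AIMD step per round trip: W <- W + 1 normally, W <- W/2 on a drop or ECN mark."""
    if congestion_signal:
        return max(1.0, window / 2)   # multiplicative decrease
    return window + 1.0               # additive increase

# Example: grow for 10 uncongested RTTs, then react to a single ECN mark.
w = 1.0
for _ in range(10):
    w = aimd_update(w, congestion_signal=False)   # w reaches 11 packets
w = aimd_update(w, congestion_signal=True)        # cut in half: 5.5 packets
```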

14 TCP Buffer Requirement. Bandwidth-delay product rule of thumb: a single flow needs C×RTT of buffering for 100% throughput. (Figure: throughput vs. buffer size B; 100% when B ≥ C×RTT, lower when B < C×RTT.)
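
Plugging in fabric-like numbers from the earlier slides (a 10Gbps link and a 100μs RTT; pairing them this way is an assumption for illustration):

```latex
\[
B \;=\; C \times RTT \;=\; 10\,\mathrm{Gb/s} \times 100\,\mu\mathrm{s}
  \;=\; 10^{6}\ \mathrm{bits} \;\approx\; 125\,\mathrm{KB\ per\ flow}.
\]
```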

15 Reducing Buffer Requirements. Appenzeller et al. (SIGCOMM '04): with a large number N of flows, a buffer of C×RTT/√N is enough. (Figure: window size/rate, buffer size, and throughput at 100%.)

16 Reducing Buffer Requirements. Appenzeller et al. (SIGCOMM '04): with a large number N of flows, C×RTT/√N is enough. But we can't rely on this statistical-multiplexing benefit in the DC: measurements show typically only 1–2 large flows at each server. Key observation: low variance in sending rate → small buffers suffice.
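
For intuition about how much the square-root rule would help if it applied, reuse the illustrative 125KB bandwidth-delay product from above and assume N = 100 concurrent long flows (an assumed number, not from the slide):

```latex
\[
B \;=\; \frac{C \times RTT}{\sqrt{N}}
  \;\approx\; \frac{125\,\mathrm{KB}}{\sqrt{100}} \;=\; 12.5\,\mathrm{KB}.
\]
```

With only 1–2 large flows per server, however, √N ≈ 1 and the full C×RTT is needed again, unless the variance of the sending rates themselves is reduced, which is exactly the property DCTCP aims for.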

17 DCTCP: Main Idea. Extract multi-bit feedback from the single-bit stream of ECN marks: reduce the window based on the fraction of marked packets (the cut is roughly half the marked fraction, vs. TCP's fixed 50%).
ECN marks 1 0 1 1 1 (fraction 0.8): TCP cuts the window by 50%, DCTCP by 40%.
ECN marks 0 0 0 0 0 0 0 0 0 1 (fraction 0.1): TCP cuts the window by 50%, DCTCP by 5%.
(Figure: window size in bytes over time, TCP vs. DCTCP.)

18 DCTCP: Algorithm. Switch side: mark packets when queue length > K. Sender side: maintain a running average of the fraction of marked packets, α ← (1 − g)·α + g·F, where F is the fraction of packets marked in the last window of data; adaptive window decrease: W ← W·(1 − α/2). Note: the window is divided by a factor between 1 and 2. (Figure: buffer of size B with marking threshold K; mark above K, don't mark below.)
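
A minimal sketch of the sender-side bookkeeping described above, applied once per window of data. The class name, the gain g = 1/16, and the initial window are assumptions; real DCTCP implementations also handle slow start, timeouts, and per-ACK ECN-echo processing:

```python
class DctcpSender:
    """Per-connection DCTCP state: an EWMA of the marked fraction scales the window cut."""

    def __init__(self, g: float = 1.0 / 16):
        self.g = g          # EWMA gain (assumed value)
        self.alpha = 0.0    # running estimate of the fraction of marked packets
        self.cwnd = 10.0    # congestion window in packets (assumed initial value)

    def on_window_acked(self, acked: int, ecn_marked: int) -> None:
        """Called once per window of data with counts of ACKed and ECN-marked packets."""
        frac = ecn_marked / max(acked, 1)
        self.alpha = (1 - self.g) * self.alpha + self.g * frac      # alpha <- (1-g)*alpha + g*F
        if ecn_marked > 0:
            self.cwnd = max(1.0, self.cwnd * (1 - self.alpha / 2))  # gentle, alpha-scaled cut
        else:
            self.cwnd += 1.0                                        # standard additive increase

# Example: a lightly marked window (1 of 10 packets) barely dents the window.
s = DctcpSender()
s.on_window_acked(acked=10, ecn_marked=1)
print(round(s.alpha, 4), round(s.cwnd, 2))   # alpha ~ 0.0063, cwnd ~ 9.97
```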

19 DCTCP vs TCP. Experiment: 2 flows (Win 7 stack), Broadcom 1Gbps switch, ECN marking threshold = 30KB. (Figure: queue length in KBytes over time.) The buffer is mostly empty under DCTCP; DCTCP mitigates incast by creating a large buffer headroom.

20 Why it Works. 1. Low latency: small buffer occupancies → low queuing delay. 2. High throughput: ECN averaging → smooth rate adjustments, low variance. 3. High burst tolerance: large buffer headroom → bursts fit; aggressive marking → sources react before packets are dropped.

21 DCTCP Deployments

22 Discussion

23 What You Said. Austin: “The paper's performance comparison to RED seems arbitrary, perhaps RED had traction at the time? Or just convenient as the switches were capable of implementing it?”

24 Evaluation. Implemented in the Windows stack. Real hardware, 1Gbps and 10Gbps experiments: 90-server testbed; Broadcom Triumph (48 1G ports, 4MB shared memory); Cisco Cat4948 (48 1G ports, 16MB shared memory); Broadcom Scorpion (24 10G ports, 4MB shared memory). Numerous micro-benchmarks: throughput and queue length, multi-hop, queue buildup, buffer pressure, fairness and convergence, incast, static vs. dynamic buffer management. Bing cluster benchmark.

25 Bing Benchmark (baseline). (Figures: completion times for background flows and query flows.)

26 Bing Benchmark (scaled 10x). (Figures: completion time (ms) for query traffic (incast bursts) and for short, delay-sensitive messages.) Deep buffers fix incast but increase latency; DCTCP is good for both incast and latency.

27 What You Said. Amy: “I find it unsatisfying that the details of many congestion control protocols (such as these) are so complicated!... can we create a parameter-less congestion control protocol that is similar in behavior to DCTCP or TIMELY?” Hongzi: “Is there a general guideline to tune the parameters, like alpha, beta, delta, N, T_low, T_high, in the system?”

28 A Bit of Analysis. How much buffering does DCTCP need for 100% throughput? → Need to quantify the queue size oscillations (stability). (Figure: window sawtooth oscillating between (W*+1)(1 − α/2) and W*+1; the packets sent in the final RTT before the cut are marked; buffer B, marking threshold K.)

29 A Bit of Analysis. How small can queues be without loss of throughput? → Need to quantify the queue size oscillations (stability). Result: K > (1/7) C×RTT for DCTCP; for TCP, K > C×RTT. What assumptions does the model make?
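
For a feel of where a bound like this comes from, a toy per-RTT simulation of synchronized DCTCP flows sharing one link. This is an illustrative discretization with assumed parameters, not the fluid model used in the papers; a negative minimum backlog means the queue drained and the link went idle (lost throughput):

```python
def min_backlog(num_flows: int, bdp_pkts: float, K: float,
                g: float = 1.0 / 16, rtts: int = 2000) -> float:
    """Toy sawtooth model: smallest queue backlog once oscillations settle.

    Each RTT every flow sends W packets, so the shared backlog is N*W - BDP.
    All packets are marked whenever the backlog exceeds K, and every sender
    reacts in lockstep with the DCTCP rule W <- W * (1 - alpha/2).
    """
    W, alpha = bdp_pkts / num_flows, 0.0
    lowest = float("inf")
    for t in range(rtts):
        backlog = num_flows * W - bdp_pkts
        marked = backlog > K
        alpha = (1 - g) * alpha + g * (1.0 if marked else 0.0)
        W = W * (1 - alpha / 2) if marked else W + 1
        if t > rtts // 2:                  # skip the initial transient
            lowest = min(lowest, backlog)
    return lowest

# Sweep the marking threshold: a small K risks draining the queue, a larger K keeps the link busy.
for K in (5, 15, 30):
    print(K, round(min_backlog(num_flows=2, bdp_pkts=100, K=K), 1))
```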

30 What You Said. Anurag: “In both the papers, one of the difference I saw from TCP was that these protocols don’t have the “slow start” phase, where the rate grows exponentially starting from 1 packet/RTT.”

31 Convergence Time. DCTCP takes at most ~40% more RTTs than TCP (“Analysis of DCTCP: Stability, Convergence, and Fairness,” SIGMETRICS 2011). Intuition: DCTCP makes smaller adjustments than TCP, but makes them much more frequently. (Figure: window convergence, TCP vs. DCTCP.)

32 TIMELY. Slides by Radhika Mittal (Berkeley).

33 Qualities of RTT: fine-grained and informative; quick response time; no switch support needed; end-to-end metric; works seamlessly with QoS.

34 RTT correlates with queuing delay

35 What You Said. Ravi: “The first thing that struck me while reading these papers was how different their approaches were. DCTCP even states that delay-based protocols are "susceptible to noise in the very low latency environment of data centers" and that "the accurate measurement of such small increases in queuing delay is a daunting task". Then, I noticed that there is a 5 year gap between these two papers…” Arman: “They had to resort to extraordinary measures to ensure that the timestamps accurately reflect the time at which a packet was put on wire…”

36 Accurate RTT Measurement

37 Hardware-Assisted RTT Measurement. Hardware timestamps mitigate noise in measurements; hardware acknowledgements avoid processing overhead.

38 Hardware vs Software Timestamps. Kernel timestamps introduce significant noise in RTT measurements compared to HW timestamps.

39 Impact of RTT Noise. Throughput degrades with increasing noise in RTT; precise RTT measurement is crucial.

40 TIMELY Framework

41 Overview. (Block diagram: timestamps feed the RTT Measurement Engine; the measured RTT drives the Rate Computation Engine; the computed rate drives the Pacing Engine, which emits paced data.)

42 RTT Measurement Engine. RTT = t_completion − t_send − serialization delay. (Figure: the sender transmits a segment and receives a HW ack; subtracting the serialization delay leaves the propagation and queuing delay between sender and receiver.)
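
A minimal sketch of that computation, assuming the NIC provides send and completion timestamps in nanoseconds and that the line rate is known; the function name and the example numbers are assumptions:

```python
def timely_rtt_ns(t_send_ns: int, t_completion_ns: int,
                  segment_bytes: int, line_rate_bps: float) -> float:
    """RTT = completion time - send time - serialization delay of the segment.

    Subtracting the serialization term removes the time spent putting the
    segment's bits on the wire, leaving propagation plus queuing delay.
    """
    serialization_ns = segment_bytes * 8 / line_rate_bps * 1e9
    return (t_completion_ns - t_send_ns) - serialization_ns

# Example: a 64KB segment on a 10Gbps link, HW ack completion seen 120us after the send.
rtt = timely_rtt_ns(t_send_ns=0, t_completion_ns=120_000,
                    segment_bytes=64 * 1024, line_rate_bps=10e9)
print(round(rtt))   # ~67571 ns of propagation + queuing delay
```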

43 Algorithm Overview. Gradient-based increase / decrease.

44 Algorithm Overview. Gradient-based increase / decrease. (Figure: RTT flat over time; gradient = 0.)

45 Algorithm Overview. Gradient-based increase / decrease. (Figure: RTT rising over time; gradient > 0.)

46 Algorithm Overview. Gradient-based increase / decrease. (Figure: RTT falling over time; gradient < 0.)

47 Algorithm Overview. Gradient-based increase / decrease. (Figure: RTT over time.)

48 Algorithm Overview. Gradient-based increase / decrease, to navigate the throughput-latency tradeoff and ensure stability.

49 Why Does Gradient Help Stability? (Figure: source and feedback loop with higher-order derivatives.) Observe not only the error, but the change in error: “anticipate” the future state.
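
One way to make that intuition concrete (a standard first-order argument, not something stated on the slide): a controller that reacts to the RTT plus its gradient is effectively reacting to an estimate of the RTT one feedback delay τ into the future,

```latex
\[
RTT(t) \;+\; \tau \cdot \left.\frac{d\,RTT}{dt}\right|_{t} \;\approx\; RTT(t + \tau),
\]
```

so senders begin backing off before the queue peaks, which damps the oscillations a purely level-based controller would show under feedback delay.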

50 What You Said. Arman: “I also think that deducing the queue length from the gradient model could lead to miscalculations. For example, consider an Incast scenario, where many senders transmit simultaneously through the same path. Noting that every packet will see a long, yet steady, RTT, they will compute a near-zero gradient and hence the congestion will continue.”

51 Algorithm Overview. Three regimes based on the measured RTT: below T_low, additive increase (better burst tolerance); above T_high, multiplicative decrease (to keep tail latency within acceptable limits); in between, gradient-based increase / decrease (to navigate the throughput-latency tradeoff and ensure stability).
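
A compressed sketch of the rate computation following the structure on this slide (threshold checks first, the RTT gradient otherwise). The constants are assumptions loosely modeled on the paper's defaults, and refinements such as hyperactive increase are omitted:

```python
class TimelyRate:
    """Simplified TIMELY rate computation: RTT thresholds plus gradient-based control."""

    def __init__(self, rate_mbps: float, min_rtt_us: float,
                 t_low_us: float = 50.0, t_high_us: float = 500.0,
                 add_step_mbps: float = 10.0, beta: float = 0.8, ewma_g: float = 0.1):
        self.rate = rate_mbps
        self.min_rtt = min_rtt_us          # used to normalize the gradient
        self.t_low, self.t_high = t_low_us, t_high_us
        self.delta, self.beta, self.g = add_step_mbps, beta, ewma_g
        self.prev_rtt = None
        self.rtt_diff = 0.0                # EWMA of consecutive RTT differences

    def update(self, new_rtt_us: float) -> float:
        if self.prev_rtt is not None:
            self.rtt_diff = (1 - self.g) * self.rtt_diff + self.g * (new_rtt_us - self.prev_rtt)
        self.prev_rtt = new_rtt_us

        if new_rtt_us < self.t_low:                      # below T_low: ignore gradient, additive increase
            self.rate += self.delta
        elif new_rtt_us > self.t_high:                   # above T_high: multiplicative decrease
            self.rate *= 1 - self.beta * (1 - self.t_high / new_rtt_us)
        else:                                            # gradient-based region
            grad = self.rtt_diff / self.min_rtt          # normalized RTT gradient
            if grad <= 0:
                self.rate += self.delta                  # RTT flat or falling: additive increase
            else:
                self.rate *= 1 - self.beta * grad        # RTT rising: back off in proportion
        return self.rate

# Example: a steadily rising RTT pushes the rate down gently after the first update.
tr = TimelyRate(rate_mbps=10_000, min_rtt_us=20.0)
for rtt_us in (60.0, 70.0, 80.0):
    print(round(tr.update(rtt_us), 1))
```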

52 Discussion

53 Implementation Set-up. TIMELY is implemented in the context of RDMA: RDMA write and read primitives are used to invoke NIC services. Priority Flow Control (PFC) is enabled in the network fabric: the RDMA transport in the NIC is sensitive to packet drops, so PFC sends out pause frames to ensure a lossless network.

54 “Congestion Spreading” in Lossless Networks. (Figure: PFC PAUSE frames propagating back through the fabric.)

55 TIMELY vs PFC

56 TIMELY vs PFC

57 What You Said. Amy: “I was surprised to see that TIMELY performed so much better than DCTCP. Did the lack of an OS-bypass for DCTCP impact performance? I wish that the authors had offered an explanation for this result.”

58 Next time: Load Balancing


