
Profiling Network Performance in Multi-tier Datacenter Applications. Jori Hardman, Carly Ho. Paper by Minlan Yu, Albert Greenberg, Dave Maltz, Jennifer Rexford, Lihua Yuan, Srikanth Kandula, Changhoon Kim.


1 Profiling Network Performance in Multi-tier Datacenter Applications
Jori Hardman, Carly Ho
Paper by Minlan Yu, Albert Greenberg, Dave Maltz, Jennifer Rexford, Lihua Yuan, Srikanth Kandula, Changhoon Kim

2 Applications inside Data Centers
(Diagram: Front-end Server -> Aggregators -> Workers)

3 Challenges of Datacenter Diagnosis
Multi-tier applications
o Hundreds of application components
o Tens of thousands of servers
Evolving applications
o Add new features, fix bugs
o Change components while the app is still in operation
Human factors
o Developers may not understand the network well
o Nagle's algorithm, delayed ACK, etc.

4 Where are the Performance Problems?
Network or application?
o App team: Why low throughput, high delay?
o Net team: No equipment failure or congestion
Network and application! -- their interactions
o Network stack is not configured correctly
o Small application writes delayed by TCP
o TCP incast: synchronized writes cause packet loss
A diagnosis tool to understand network-application interactions

5 Diagnosis in Today's Data Center
o App logs (#reqs/sec, response time, e.g. 1% of requests > 200ms delay): application-specific
o Packet sniffer traces (filtered for long-delay requests): too expensive
o Switch logs (#bytes/pkts per minute): too coarse-grained
o SNAP, diagnosing net-app interactions on the host: generic, fine-grained, and lightweight

6 Full Knowledge of Data Centers
Direct access to network stack
o Directly measure rather than relying on inference
o E.g., # of fast retransmission packets
Application-server mapping
o Know which application runs on which servers
o E.g., which app to blame for sending a lot of traffic
Network topology and routing
o Know which application uses which resource
o E.g., which app is affected if a link is congested

7 SNAP: Scalable Net-App Profiler

8 Outline
SNAP architecture
o Passively measure real-time network stack info
o Systematically identify performance problems
o Correlate across connections to pinpoint problems
SNAP deployment
o Operators: Characterize performance problems
o Developers: Identify problems for applications
SNAP validation and overhead

9 SNAP Architecture Step 1: Network-stack measurements

10 What Data to Collect?
Goals:
o Fine-grained: in milliseconds or seconds
o Low overhead: low CPU overhead and data volume
o Generic across applications
Two types of data:
o Poll TCP statistics -> network performance
o Event-driven socket logging -> app expectation
o Both exist in today's Linux and Windows systems
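The polling half of this pipeline is easy to sketch. Below is a minimal illustration (plain Python, not SNAP's actual code) of Poisson-sampled polling: gaps between polls are drawn from an exponential distribution, so snapshots are unbiased in time; the rate and duration here are made-up values.

```python
import random

def poisson_poll_times(rate_hz, duration_s, seed=0):
    """Return poll times drawn from a Poisson process: gaps are
    exponentially distributed with mean 1/rate_hz, so the snapshots
    sample the connection's state uniformly in time."""
    rng = random.Random(seed)
    t, times = 0.0, []
    while True:
        t += rng.expovariate(rate_hz)
        if t >= duration_s:
            return times
        times.append(t)

# e.g. poll TCP statistics about twice per second for one minute
schedule = poisson_poll_times(rate_hz=2.0, duration_s=60.0)
```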

11 TCP statistics
Instantaneous snapshots
o #Bytes in the send buffer
o Congestion window size, receiver window size
o Snapshots based on Poisson sampling
Cumulative counters
o #FastRetrans, #Timeout
o RTT estimation: #SampleRTT, #SumRTT
o RwinLimitTime
o Calculate the difference between two polls
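The cumulative counters only become meaningful as differences between consecutive polls; for instance, mean RTT over an interval is diff(SumRTT) / diff(SampleRTT). A small bookkeeping sketch (field names follow the slide; the numbers are invented):

```python
def counter_delta(prev, curr):
    """Difference of cumulative TCP counters between two polls."""
    return {name: curr[name] - prev[name] for name in prev}

def mean_rtt(delta):
    """Average RTT over one polling interval, or None if no samples."""
    if delta["SampleRTT"] == 0:
        return None
    return delta["SumRTT"] / delta["SampleRTT"]

prev = {"FastRetrans": 2, "Timeout": 0, "SumRTT": 100.0, "SampleRTT": 10}
curr = {"FastRetrans": 4, "Timeout": 1, "SumRTT": 400.0, "SampleRTT": 20}
delta = counter_delta(prev, curr)
# 2 fast retransmissions and 1 timeout this interval; mean RTT 30.0
```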

12 SNAP Architecture Step 2: Performance problem classification

13 Life of Data Transfer
o Application generates the data
o Copy data to send buffer
o TCP sends data to the network
o Receiver receives the data and ACKs
(Diagram: Sender App -> Send Buffer -> Network -> Receiver)

14 Classifying Socket Performance
o Sender app: bottlenecked by CPU, disk, etc.; slow due to app design (small writes)
o Send buffer: not large enough
o Network: fast retransmission, timeout
o Receiver: not reading fast enough (CPU, disk, etc.); not ACKing fast enough (delayed ACK)

15 Identifying Performance Problems
o Sender app limited: not any other problem
o Send buffer limited: send buffer is almost full
o Network limited: #Fast retransmission, #Timeout
o Receiver window limited: RwinLimitTime
o Delayed ACK: diff(SumRTT) > diff(SampleRTT) * MaxQueuingDelay
(Detection methods: direct measurement, sampling, and inference)
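These rules can be read as a per-poll decision list. A simplified sketch (the field names, the 90%-full threshold, and the MaxQueuingDelay value are illustrative, not SNAP's exact logic):

```python
def classify(s, max_queuing_delay=2.0):
    """Map one poll of per-connection stats (deltas of the slide-11
    counters plus buffer snapshots) to a bottleneck label, checking
    the directly measured signals first and falling through to
    'sender app limited' when nothing else explains the slowness."""
    if s["send_buf_bytes"] >= 0.9 * s["send_buf_size"]:
        return "send buffer limited"
    if s["FastRetrans"] > 0 or s["Timeout"] > 0:
        return "network limited"
    if s["RwinLimitTime"] > 0:
        return "receiver window limited"
    if s["SumRTT"] > s["SampleRTT"] * max_queuing_delay:
        return "delayed ACK suspected"
    return "sender app limited"

lossy = {"send_buf_bytes": 100, "send_buf_size": 65536,
         "FastRetrans": 0, "Timeout": 2, "RwinLimitTime": 0,
         "SumRTT": 10.0, "SampleRTT": 10}
# classify(lossy) -> "network limited"
```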

16 SNAP Architecture Step 3: Correlation across connections

17 Pinpoint Problems via Correlation
Correlation over shared switch/link/host
o Packet loss for all the connections going through one switch/host
o Pinpoint the problematic switch

18 Pinpoint Problems via Correlation
Correlation over application
o Same application has problems on all machines
o Report aggregated application behavior
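A minimal sketch of the shared-resource half of this correlation (the data layout, with each connection carrying its switch path and a lossy flag, and the 80% threshold are hypothetical):

```python
from collections import defaultdict

def blame_switches(conns, threshold=0.8):
    """Pinpoint switches where most traversing connections saw loss.

    `conns` is a list of (switch_path, lossy) pairs; a switch is
    flagged when more than `threshold` of its connections are lossy.
    """
    lossy = defaultdict(int)
    total = defaultdict(int)
    for path, is_lossy in conns:
        for sw in path:
            total[sw] += 1
            lossy[sw] += is_lossy
    return [sw for sw in total if lossy[sw] / total[sw] > threshold]

conns = [(["tor1", "agg1"], True),
         (["tor1", "agg1"], True),
         (["tor2", "agg1"], False)]
# blame_switches(conns) -> ["tor1"]  (agg1 is 2/3 lossy, below threshold)
```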

19 SNAP Architecture

20 SNAP Deployment
Production data center
o 8K machines, 700 applications
o Ran SNAP for a week, collected petabytes of data
Operators: Profiling the whole data center
o Characterize the sources of performance problems
o Key problems in the data center
Developers: Profiling individual applications
o Pinpoint problems in app software, the network stack, and their interactions

21 Performance Problem Overview A small number of apps suffer from significant performance problems

22 SNAP diagnosis
SNAP diagnosis steps:
o Correlate connection performance to pinpoint applications with problems
o Expose socket and TCP stats
o Find the root cause with operators and developers
o Propose potential solutions

23 Classifying Socket Performance
o Sender app: bottlenecked by CPU, disk, etc.; slow due to app design (small writes)
o Send buffer: not large enough
o Network: fast retransmission, timeout
o Receiver: not reading fast enough (CPU, disk, etc.); not ACKing fast enough (delayed ACK)

24 Send Buffer and Recv Window
Problems on a single connection:
o Send buffer: some apps use the 8KB default, which is too small
o Recv window: the fixed 64KB maximum is not enough for some apps
(Diagram: the app process writes bytes into the TCP send buffer; the receiving app process reads bytes from the TCP recv buffer)
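Both defaults can be overridden per socket. A minimal Python example (the 64KB value is illustrative; real sizes should come from measurement):

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Ask for buffers larger than the defaults discussed on the slide.
s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 64 * 1024)
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 64 * 1024)
sndbuf = s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
rcvbuf = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
s.close()
# On Linux the kernel doubles the requested value for bookkeeping,
# so getsockopt may report more than was asked for.
```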

25 Need Buffer Autotuning
Problems of sharing the buffer at a single host
o More send buffer problems on machines with more connections
o How to set buffer sizes cooperatively?
Auto-tuning send buffer and recv window
o Dynamically allocate buffer across applications
o Based on the congestion window of each app
o Tune send buffer and recv window together
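One way to read "allocate buffer across applications based on congestion window" is proportional sharing: connections that can actually fill the pipe get more buffer. A hypothetical sketch, not the paper's algorithm:

```python
def share_buffer(total_bytes, cwnds):
    """Split a host-wide buffer budget across connections in
    proportion to each connection's congestion window, so that
    connections that can use buffer space actually get it."""
    total_cwnd = sum(cwnds.values())
    if total_cwnd == 0:
        even = total_bytes // max(len(cwnds), 1)
        return {c: even for c in cwnds}
    return {c: total_bytes * w // total_cwnd for c, w in cwnds.items()}

alloc = share_buffer(1_000_000, {"conn_a": 30, "conn_b": 10})
# conn_a gets 750000 bytes, conn_b gets 250000
```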

26 Classifying Socket Performance
o Sender app: bottlenecked by CPU, disk, etc.; slow due to app design (small writes)
o Send buffer: not large enough
o Network: fast retransmission, timeout
o Receiver: not reading fast enough (CPU, disk, etc.); not ACKing fast enough (delayed ACK)

27 Packet Loss in a Day in the Datacenter
o Packet loss bursts every hour
o 2-4 am is the backup time

28 Spread Writes over Multiple Connections
SNAP diagnosis:
o More timeouts than fast retransmissions
o Small packet sending rate
Root cause:
o Two connections were used to avoid head-of-line blocking
o Low-rate small requests get more timeouts
Solution:
o Use one connection; assign an ID to each request
o Combine data to reduce timeouts
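The proposed fix, one connection with an ID per request, amounts to simple message framing. A hypothetical sketch using a 4-byte ID and 4-byte length prefix:

```python
import struct

def frame(req_id, payload):
    """Prefix each request with its ID and length so many logical
    requests can share one TCP connection."""
    return struct.pack("!II", req_id, len(payload)) + payload

def unframe(buf):
    """Split a byte stream back into (req_id, payload) pairs."""
    out, off = [], 0
    while off < len(buf):
        req_id, length = struct.unpack_from("!II", buf, off)
        off += 8
        out.append((req_id, buf[off:off + length]))
        off += length
    return out

stream = frame(1, b"query-a") + frame(2, b"query-b")
# unframe(stream) -> [(1, b"query-a"), (2, b"query-b")]
```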

29 Congestion Window Allows Sudden Bursts
SNAP diagnosis:
o Significant packet loss
o Congestion window is too large after an idle period
Root cause:
o Slow start restart is disabled

30 Slow Start Restart
Slow start restart:
o Reduces the congestion window size if the connection is idle, to prevent sudden bursts
(Diagram: window drops after an idle time)

31 Slow Start Restart
However, developers disabled it because:
o They intentionally keep the congestion window large over a persistent connection to reduce delay
o E.g., if the congestion window is large, it takes just 1 RTT to send 64 KB of data
Potential solution:
o New congestion control for delay-sensitive traffic
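The trade-off can be made concrete with slow-start arithmetic: a warm 64KB window delivers the response in one round trip, while a window shrunk by slow start restart needs several. A back-of-the-envelope sketch (the MSS and window values are illustrative):

```python
def rtts_to_send(nbytes, cwnd_segments, mss=1460):
    """Round trips needed to deliver nbytes, with the congestion
    window doubling each RTT as in slow start."""
    sent, rtts = 0, 0
    while sent < nbytes:
        sent += cwnd_segments * mss
        cwnd_segments *= 2
        rtts += 1
    return rtts

# Warm 64KB window (~45 MSS segments): the response fits in one RTT.
# After slow start restart shrinks the window to ~2 segments, the
# same 64KB takes several RTTs of window growth to deliver.
```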

32 Classifying Socket Performance
o Sender app: bottlenecked by CPU, disk, etc.; slow due to app design (small writes)
o Send buffer: not large enough
o Network: fast retransmission, timeout
o Receiver: not reading fast enough (CPU, disk, etc.); not ACKing fast enough (delayed ACK)

33 Timeout and Delayed ACK
SNAP diagnosis:
o Congestion window drops to one after a timeout
o The retransmission is then held up by a delayed ACK
Solution:
o Have the congestion window drop to two instead

34 Nagle and Delayed ACK
SNAP diagnosis:
o Delayed ACK and small writes
o W1: write() less than MSS; W2: write() less than MSS
(Diagram: the app write()s W1 and W2; TCP sends the segment with W1, Nagle's algorithm holds the segment with W2 until the ACK for W1 arrives, and the receiver delays that ACK by 200ms before the app read()s W1 and W2)
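A common mitigation for this Nagle/delayed-ACK interaction is disabling Nagle's algorithm on latency-sensitive sockets, so small writes go out without waiting for the previous segment's (possibly delayed) ACK:

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Disable Nagle's algorithm: sub-MSS writes are sent immediately
# instead of waiting for the ACK of the previous segment, which the
# receiver may be delaying by up to 200ms.
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
nodelay = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
s.close()
```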

35 Send Buffer and Delayed ACK
SNAP diagnosis: delayed ACK and send buffer = 0
o With a send buffer: the network stack reports send complete (1) as soon as the data is copied into the buffer; the receiver's ACK (2) arrives later
o With the send buffer set to zero: the application does not see send complete (2) until the ACK (1) arrives, so a delayed ACK stalls the application directly

36 SNAP Validation and Overhead

37 Correlation Accuracy
o Injected two real problems
o Mixed labeled data with real production data
o Correlation over shared machines successfully identified the labeled machines
o 2.7% of machines have ACC > 0.4

38 SNAP Overhead
Data volume
o Socket logs: 20 bytes per socket
o TCP statistics: 120 bytes per connection per poll
CPU overhead
o Log socket calls: event-driven, < 5%
o Read the TCP table
o Poll TCP statistics

39 Reducing CPU Overhead
CPU overhead
o Polling TCP statistics and reading the TCP table
o Increases with the number of connections and the polling frequency
o E.g., 35% CPU for polling 5K connections at a 50 ms interval; 5% for polling 1K connections at a 500 ms interval
Adaptive tuning of the polling frequency
o Reduce the polling frequency to stay within a target CPU budget
o Devote more polling to the more problematic connections
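The adaptive tuning on this slide can be sketched as a simple feedback rule: poll less often when measured CPU is over budget, more often when well under, clamped to sane bounds. All constants here are invented for illustration:

```python
def next_interval(interval_ms, cpu_pct, target_pct=5.0,
                  lo_ms=50, hi_ms=5000):
    """Adjust the TCP-statistics polling interval to keep the
    profiler's CPU usage near a target: back off when over budget,
    poll more often (down to a floor) when well under it."""
    if cpu_pct > target_pct:
        interval_ms *= 2          # too expensive: poll half as often
    elif cpu_pct < 0.5 * target_pct:
        interval_ms //= 2         # cheap: spend budget on resolution
    return max(lo_ms, min(hi_ms, interval_ms))

# Over budget at a 50ms interval -> back off to 100ms;
# well under budget at 500ms -> speed up to 250ms.
```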

40 Conclusion
A simple, efficient way to profile data centers
o Passively measure real-time network stack information
o Systematically identify components with problems
o Correlate problems across connections
Deploying SNAP in a production data center
o Characterize data center performance problems -> help operators improve the platform and tune the network
o Discover app-net interactions -> help developers pinpoint app problems

