
Slide 1: Analysis and Modeling of Time-Correlated Failures in Large-Scale Distributed Systems
Nezih Yigitbasi (TU Delft), Matthieu Gallet (École Normale Supérieure de Lyon), Derrick Kondo (INRIA), Alexandru Iosup (TU Delft), Dick Epema (TU Delft)
The Failure Trace Archive: http://guardg.st.ewi.tudelft.nl/
Delft University of Technology, 11-2-2014

Slide 2: Failures Do Happen
- "Build a computing system with 10 thousand servers with MTBF of 30 years each, watch one fail per day." Jeff Dean, Google Fellow, LADIS09 keynote
- "Average worker deaths per MapReduce job is 1.2." MapReduce, OSDI04
- "20-45% failures in TeraGrid." Khalili et al., GRID06
- "During the month of March 2005 on one dedicated cluster with 1500 Xeon CPUs, there were 32,580 Sawzall jobs launched, using an average of 220 machines each. While running those jobs, 18,636 failures occurred (application failure, network outage, system crash, etc.) that triggered rerunning some portion of the job." Rob Pike et al., Google

Slide 3: Are Failures Independent?
- Common assumption: failures are independent. Is this realistic for large-scale distributed systems?
- We already know that space correlations exist: M. Gallet, N. Yigitbasi, B. Javadi, D. Kondo, A. Iosup, D. Epema, "A Model for Space-Correlated Failures in Large-Scale Distributed Systems", Euro-Par 2010.
- Time correlations may impact:
  - Proactive fault-tolerance solutions
  - Design decisions
  - Checkpointing and scheduling decisions (e.g., migrate computation at the beginning of a predicted peak)

Slide 4: Our Goals
- GOAL 1: Investigate whether failures have time correlations
- GOAL 2: Model the time-varying behavior of failures (peaks)

Slide 5: Outline
- Background
- Our Approach
- Analysis of Time Correlation
- Modeling the Peaks of Failures
- Conclusions

Slide 6: Why Not Root-Cause Analysis?
- Root-cause analysis is definitely useful
- Challenges:
  - Systems are large and complex
  - Not all subsystems provide detailed info
  - Little monitoring/debugging support
  - Environment-specific or temporary failures
  - Huge size of failure data (19 systems)

Slide 7: The Failure Trace Archive (FTA), http://fta.inria.fr
Provides:
- Availability traces of diverse distributed systems of different scale
- A standard format for failure events
- Tools for parsing and analysis
Enables:
- Comparing models/algorithms using identical data sets
- Evaluating the generality/specificity of models/algorithms across different types of systems
- Analyzing the evolution of availability across time scales
- And many more

Slide 8: FTA Schema
- Hierarchical trace format: resource-centric, event-based
- Associated metadata
- Codes for different components and events
- Available in raw, tabbed, and MySQL formats

Slide 9: Sample Trace
Each record contains:
- Identifiers for the event/component/node/platform
- Node name
- Type of event: unavailability/availability
- Event start/stop time (UNIX time)
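As a rough illustration of the record layout listed above, the sketch below parses one tab-separated trace line into a typed object. The exact column order, field names, and event-type coding are assumptions for illustration; the real FTA tabbed format defines its own schema and codes.

```python
from dataclasses import dataclass

# Hypothetical record layout based on the fields listed on the slide;
# the actual FTA tabbed format defines its own column order and codes.
@dataclass
class FtaEvent:
    event_id: int
    component_id: int
    node_id: int
    platform_id: int
    node_name: str
    event_type: int   # assumed coding, e.g., 0 = unavailability, 1 = availability
    start_time: int   # UNIX timestamp
    stop_time: int    # UNIX timestamp

    @property
    def duration(self) -> int:
        """Length of the event interval in seconds."""
        return self.stop_time - self.start_time

def parse_line(line: str) -> FtaEvent:
    """Parse one tab-separated trace record into an FtaEvent."""
    f = line.rstrip("\n").split("\t")
    return FtaEvent(int(f[0]), int(f[1]), int(f[2]), int(f[3]),
                    f[4], int(f[5]), int(f[6]), int(f[7]))

# Example with a made-up record: a 2-hour unavailability of "node003".
ev = parse_line("42\t7\t3\t1\tnode003\t0\t1109635200\t1109642400")
print(ev.node_name, ev.duration)  # node003 7200
```

A parser like this is the natural first step before building the failure-rate time series analyzed on the following slides.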

Slide 10: Outline
- Background
- Our Approach
- Analysis of Time Correlation
- Modeling the Peaks of Failures
- Conclusions

Slide 11: Our Approach (1): Outline
- Traces: nineteen failure traces from the FTA, mostly production systems
- Analysis: use the autocorrelation of the failure-rate time series
- Modeling: fit well-known probability distributions to the failure data to model failure peaks

Slide 12: Our Approach (2): Traces
- 100K+ hosts
- ~1.2 M failure events
- 15+ years of operation in total
- http://fta.inria.fr

Slide 13: Our Approach (3): Analysis
- Autocorrelation Function (ACF): similarity between observations as a function of the time lag between them
- A mathematical tool for finding repeating patterns, used here for assessing time correlations
- Takes values in [-1, 1]: values near 0 indicate weak correlation, values near -1 or +1 indicate strong correlation
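A minimal sketch of the sample ACF estimator applied to a failure-rate series. The weekly-cycle input data is synthetic, made up purely to illustrate the repeating-pattern effect the slide describes:

```python
import numpy as np

def acf(series, max_lag):
    """Sample autocorrelation of a time series for lags 1..max_lag."""
    x = np.asarray(series, dtype=float)
    x = x - x.mean()                         # center the series
    var = np.dot(x, x)                       # lag-0 autocovariance (unnormalized)
    return np.array([np.dot(x[:-k], x[k:]) / var for k in range(1, max_lag + 1)])

# A synthetic failure-rate series with a strong period of 7
# (e.g., a weekly cycle in daily failure counts).
t = np.arange(365)
failures_per_day = 10 + 5 * np.sin(2 * np.pi * t / 7)
r = acf(failures_per_day, 10)
print(round(r[6], 2))  # ACF at lag 7 is close to 1
```

With this normalization the estimate stays within [-1, 1], and a high value at lag 7 is exactly the kind of repeating pattern (daily/weekly cycle) reported for several traces later in the deck.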

Slide 14: Our Approach (4): Modeling
- We fit five probability distributions to the empirical data: Exponential, Weibull, Pareto, Log-Normal, and Gamma
- Parameters are estimated with maximum likelihood estimation and validated with goodness-of-fit tests
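The fitting procedure can be sketched with SciPy, whose `fit()` method performs maximum likelihood estimation for these families. The data here is synthetic (drawn from a Weibull, standing in for failure inter-arrival times); the paper fits the real trace data, and note that a KS test with estimated parameters is only an approximate goodness-of-fit check:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic "failure inter-arrival times" in hours, Weibull-distributed.
data = stats.weibull_min.rvs(c=0.7, scale=2.0, size=2000, random_state=rng)

# The five candidate families from the slide. floc=0 pins the location
# parameter so each family is fit on the positive half-line.
candidates = {
    "exponential": stats.expon,
    "weibull": stats.weibull_min,
    "pareto": stats.pareto,
    "lognormal": stats.lognorm,
    "gamma": stats.gamma,
}

results = {}
for name, dist in candidates.items():
    params = dist.fit(data, floc=0)                    # maximum likelihood estimation
    d_stat, p_value = stats.kstest(data, dist.cdf, args=params)
    results[name] = (d_stat, p_value)                  # smaller D = better fit

best = min(results, key=lambda n: results[n][0])
print(best, results[best])
```

Ranking candidates by KS distance (or by p-value) mirrors the slide's MLE-plus-goodness-of-fit workflow for picking the best-fitting family per quantity.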

Slide 15: Outline
- Background
- Our Approach
- Analysis of Time Correlation
- Modeling the Peaks of Failures
- Conclusions

Slide 16: Analysis (1): Autocorrelation
- Many systems exhibit moderate/strong autocorrelation for moderate/short time lags (GRID5K, LDNS, SKYPE, ...)
- [Figure: ACF for the WEBSITES trace]

Slide 17: Analysis (2): Autocorrelation
- A small number of systems exhibit low autocorrelation (TERAGRID, PNNL, NOTRE-DAME)
- [Figure: ACF for the TERAGRID trace]

Slide 18: Analysis (3): Failure Patterns
- Failures exhibit daily/weekly cycles
- Systems with similar usage patterns have similar failure patterns
- [Figures: daily/weekly cycles in the MICROSOFT and SKYPE traces]

Slide 19: Analysis (4): Workload Intensity vs. Failure Rate
- There is a strong correlation between workload intensity and failure rate in some systems
- [Figure: workload intensity and failure rate for the GRID5000 trace]

Slide 20: Outline
- Background
- Our Approach
- Analysis of Time Correlation
- Modeling the Peaks of Failures
- Conclusions

Slide 21: Failure Peaks (1): Model
[Figure: failure-rate time series with its mean μ and the threshold μ + kσ; intervals where the rate exceeds the threshold are marked as peak periods 1-4]

Slide 22: Failure Peaks (2): Identification
- Our goal: balance between capturing the extreme system behavior and characterizing an important part of the system failures
- We use a threshold μ + kσ to isolate peaks, where k is a positive constant
  - Large k: few periods, explaining only a small fraction of failures
  - Small k: more failures, of probably very different characteristics
- We use k = 1
  - Tried k = {0.5, 0.9, 1.0, 1.1, 1.25, 1.5, 2.0}
  - Over all traces, the average fraction of downtime and the average number of failures are close (see Technical Report)
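The thresholding rule above can be sketched directly: compute μ + kσ over the failure-rate series and report the maximal runs of time slots above it. The hourly counts below are made-up example data, not trace values:

```python
import numpy as np

def find_peaks(failure_rate, k=1.0):
    """Return (start, end) index pairs of peak periods: maximal runs of
    time slots whose failure rate exceeds mu + k*sigma."""
    x = np.asarray(failure_rate, dtype=float)
    threshold = x.mean() + k * x.std()   # the mu + k*sigma rule from the slide
    above = x > threshold
    peaks, start = [], None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = i                    # a peak period begins
        elif not flag and start is not None:
            peaks.append((start, i - 1)) # the peak period ends
            start = None
    if start is not None:
        peaks.append((start, len(x) - 1))
    return peaks

# Made-up hourly failure counts: mostly quiet with two bursts.
rate = [1, 2, 1, 9, 10, 8, 1, 0, 2, 1, 8, 9, 1]
print(find_peaks(rate, k=1.0))  # [(3, 5), (10, 11)]
```

Raising k shrinks or removes the detected periods, while lowering it merges quieter slots into the peaks, which is exactly the trade-off the slide describes for choosing k = 1.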

Slide 23: Failure Peaks (3): Modeling Results (1)
1. On average, 50%-95% of the system downtime is caused by failures that originate during peaks, yet the fraction of peaks is below 10% for all platforms
2. The average peak durations are on the order of 1-2 hours
3. The average time between peaks is on the order of 15-80 hours
4. The average failure inter-arrival time (IAT) over the entire trace is about 9x the IAT during peaks

Slide 24: Failure Peaks (4): Modeling Results (2)
5. The Exponential distribution is not a good fit for the IAT during peaks, the time between peaks, or the failure duration during peaks; traditional models are not enough
6. The modeled quantities do not follow a heavy-tailed distribution: the goodness-of-fit test results (p-values) for the Pareto distribution are very low
7. The Weibull and Log-Normal distributions provide the best fit (see the paper for the parameters)

Slide 25: Conclusions (1)
Large-scale study:
- Nineteen traces, most of which are production systems
- 100K+ hosts, ~1.2 M failure events, 15+ years of operation
- Four new traces available in the FTA (3 CONDOR + 1 TERAGRID)
GOAL 1: Analysis
- Failures exhibit strong periodic behavior and time correlation
- Systems with similar usage patterns have similar failure patterns
- Strong correlation between workload intensity and failure rate

Slide 26: Conclusions (2)
GOAL 2: Modeling
- Modeled peak duration, time between peaks, failure IAT during peaks, and failure duration during peaks
- On average, 50%-95% of the system downtime is caused by failures that originate during peaks (fraction of peaks < 10%)
- The Weibull and Log-Normal distributions provide a good fit

Slide 27: Thank You! Questions? Comments?
More information:
- M.N.Yigitbasi@tudelft.nl, http://www.st.ewi.tudelft.nl/~nezih/
- Guard-g Project: http://guardg.st.ewi.tudelft.nl/
- The Failure Trace Archive: http://fta.inria.fr
- PDS publication database: http://www.pds.twi.tudelft.nl


Slide 29: Autocorrelation Function
[Figure: autocorrelation coefficient (from 0 to +1) vs. lag k (0 to 100), showing significant positive correlation at short lags]

Slide 30: Autocorrelation Function
[Figure: autocorrelation coefficient vs. lag k, with no statistically significant correlation beyond a certain lag]

Slide 31: Long-Range Dependence
- For most processes (e.g., Poisson or compound Poisson), the autocorrelation function drops to zero very quickly, usually immediately or exponentially fast
- For self-similar processes, the autocorrelation function drops very slowly (i.e., hyperbolically) toward zero, and may never reach zero
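The contrast between the two decay regimes can be made concrete with textbook model ACFs: an AR(1) process (short-range dependent, exponential decay ρ(k) = φ^k) versus a hyperbolic tail ρ(k) = k^(-β) typical of long-range dependence. The parameter values φ = 0.6 and β = 0.3 are illustrative choices, not values from the traces:

```python
import numpy as np

lags = np.arange(1, 101)

# Short-range dependence: the ACF of an AR(1) process decays
# exponentially fast, rho(k) = phi**k.
phi = 0.6
srd_acf = phi ** lags

# Long-range dependence: the ACF decays hyperbolically,
# rho(k) = k**(-beta) with 0 < beta < 1, and its sum diverges.
beta = 0.3
lrd_acf = lags.astype(float) ** (-beta)

# At lag 50 the exponential tail is negligible; the hyperbolic tail is not.
print(srd_acf[49] < 1e-10, lrd_acf[49] > 0.3)  # True True
```

This summability difference (the exponential tail sums to a finite value, the hyperbolic one does not) is the standard way short-range and long-range dependence are distinguished.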

Slide 32: Autocorrelation Function
[Figure: autocorrelation coefficient vs. lag k, comparing a typical long-range dependent process (slow, hyperbolic decay) with a typical short-range dependent process (fast decay)]
