Did Something Change? Using Statistical Techniques to Interpret Service and Resource Metrics Frank Bereznay.


1 Did Something Change? Using Statistical Techniques to Interpret Service and Resource Metrics Frank Bereznay

2 Abstract Did Something Change? In a perfect world, one would always know the answer to that question. Unfortunately, nobody works in a perfect world. This paper/presentation will explore statistical techniques used to look for deviations in metrics that are due to assignable causes, as opposed to the period-to-period variation that is normally present. Hypothesis Testing, Statistical Process Control, Multivariate Adaptive Statistical Filtering, and Analysis of Variance will be compared and contrasted. SAS code will be used to perform the analysis. Exploratory analysis techniques will be used to build populations for analysis purposes.

3 Outline  What is Statistics all about?  It’s the population that counts  Repeatable processes  Four techniques to review  Hypothesis Testing  Statistical Process Control  MultiVariate Adaptive Statistical Filtering (MASF)  Analysis of Variance (ANOVA)  Example  Summary & Questions

4 A Note About Bill Mullen

5 What is Statistics All About?  It is the Population that Counts.  Populations have Parameters.  Samples have Statistics.  The Science of Statistics is all about estimating Population Parameters by taking Samples and calculating Statistics.

6 What is Statistics All About?  What is your population?  It can be anything you want it to be, but  It must have well defined boundaries.  A production cycle of a manufacturing process.  A work shift.  A bottling run for a particular wine vintage.  It must be randomly sampled.

7 Hypothesis Testing  Standard topic for a first-year statistics class.  Simple and easy to do.  Interpretation of the results is often misunderstood.

8 Hypothesis Testing  Start with a statement you wish to contradict or disprove. Typically this is the status quo. It becomes the null hypothesis.  The average message rate is 15 per minute.  Create an alternative hypothesis that contradicts the null hypothesis.  The average message rate is not 15 per minute.

9 Hypothesis Testing  Standard notation for stating the problem: H0: μ = 15 versus H1: μ ≠ 15 (a two-tailed test).

10 Hypothesis Testing  What is the population we are working with here?  It is a 24-hour period.  We must randomly sample across the entire period.  We randomly collect message rates at 10 different points in time.  13, 14, 16, 11, 16, 15, 12, 16, 12, 14

11 Hypothesis Testing – Population Parameters

12 Hypothesis Testing – Population Distribution

13 Hypothesis Testing – Sample Statistics

14 Hypothesis Testing Calculation of the t statistic with 9 (N-1) Degrees of Freedom

15 Hypothesis Testing

16 Hypothesis Testing  So, what does this tell us?  The official statement is:  At a 95% confidence level, the data is insufficient for us to state that the mean of the population is not 15 for the 24-hour period being examined.  Important point: the contrary is not necessarily true.  This does not in any way prove that the population mean is 15.
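The calculation behind that conclusion can be sketched in a few lines. This is a minimal Python illustration (not the paper's SAS code), using the slide's ten sampled message rates and the tabled two-tailed critical value t(0.975, df=9) ≈ 2.262:

```python
import math
import statistics

# The ten randomly sampled message rates from the slide
sample = [13, 14, 16, 11, 16, 15, 12, 16, 12, 14]
mu0 = 15.0  # null hypothesis: the average message rate is 15 per minute

n = len(sample)
xbar = statistics.mean(sample)         # sample mean
s = statistics.stdev(sample)           # sample standard deviation (divisor n-1)
t = (xbar - mu0) / (s / math.sqrt(n))  # t statistic with n-1 = 9 degrees of freedom

t_crit = 2.262  # two-tailed critical value for alpha = 0.05, df = 9

print(f"x-bar = {xbar:.2f}, s = {s:.3f}, t = {t:.3f}")
if abs(t) < t_crit:
    print("Fail to reject H0: insufficient evidence that the mean is not 15")
else:
    print("Reject H0: the mean message rate differs from 15")
```

With this sample the statistic works out to roughly t = -1.88, inside the acceptance region, which matches the slide's conclusion.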

17 Hypothesis Testing  Statistical Assumptions that need to be considered.  Underlying population does not need to be normally distributed.  The population must be randomly sampled.

18 Hypothesis Testing  Some practical uses.  Validating we have met an SLA.  Looking to see if something is not what we expect it to be.  Key point  This technique combines an a priori expectation about a quality metric with sampled data.  You need to know your data and choose wisely.

19 Statistical Process Control  Two legends stand out in this area:  Walter Shewhart  W. Edwards Deming  SPC is conceptually similar to Hypothesis Testing, but computationally different.  No a priori data point is needed.  Data is sub-grouped for calculation purposes.  SPC and Hypothesis Testing can produce different results for the same set of data.

20 Statistical Process Control (chart: Output by Sample Order)

21 Statistical Process Control

22 Statistical Process Control Done the correct way: 15 ± 0.373 × 4.33
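For readers who want to reproduce that arithmetic, here is a minimal Python sketch of X-bar chart limits computed from sub-grouped data. The subgroup values below are hypothetical (the slide's raw data is not reproduced here); A2 = 0.373 is the standard control-chart constant for subgroups of size 8, matching the factor on the slide.

```python
import statistics

# Hypothetical sub-grouped output measurements: 3 subgroups of size 8
subgroups = [
    [14, 16, 12, 15, 17, 13, 16, 14],
    [15, 13, 16, 14, 15, 17, 12, 16],
    [16, 14, 15, 13, 17, 15, 14, 16],
]

A2 = 0.373  # X-bar chart constant for subgroup size n = 8

xbar_bar = statistics.mean(statistics.mean(g) for g in subgroups)  # grand mean
r_bar = statistics.mean(max(g) - min(g) for g in subgroups)        # mean range

ucl = xbar_bar + A2 * r_bar  # upper control limit
lcl = xbar_bar - A2 * r_bar  # lower control limit
print(f"limits: {lcl:.2f} .. {ucl:.2f} around grand mean {xbar_bar:.2f}")
```

With the slide's values (grand mean 15, mean range 4.33) the same formula yields the 15 ± 0.373 × 4.33 limits shown above.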

23 Statistical Process Control Without Sub-grouping limit calculation

25 Statistical Process Control  Statistical Assumptions that need to be considered.  The data does not need to be normally distributed.  Proper sub-grouping of the data is fundamental to the technique.  The sampling plan must be random and cover the boundaries of the population being examined.

26 Statistical Process Control  Practical Uses  Useful for measuring discrete physical objects.  Things that have physical properties.  Counts for outputs.  Dollar volumes for orders / sales.  Not appropriate for the interval-based instrumentation data we frequently use.

27 Multivariate Adaptive Statistical Filtering (MASF)  Developed by Annie Shum and Jeff Buzen.  Subject of 1995 CMG Paper by same name.  Practitioner’s approach to create a statistical detection technique which addresses the unique challenges of the interval driven time series datasets used by Computer Resource Management Professionals.

28 Why MASF?  Variance based statistical detection techniques are based on repeatable processes.  Filling a bottle with wine.  Manufacturing a roll of paper.  Commercial computer workloads are generally not repeatable processes (and that is an understatement!).

29 MASF  A two step process is established:  A Reference Set is created during a period of normal operation in place of a random sample.  The Reference Set is used as a set of criteria to examine data from subsequent periods.

30 MASF – Reference Set  What is a normal period?  Workloads vary by time of day, day of week and month of year.

31 MASF – Aggregation Policies  The collected data can / should be grouped into sets of hours with the same characteristics.  Increases the number of samples per collection period.

32 MASF – Aggregation Policies  Response Time Example.

33 MASF – Detection Limits (charts: Monday; Tuesday through Thursday; Friday)

34 MASF - Summary  Very robust statistical detection technique.  Addresses random sampling issues.  Addresses volatility in commercial computing workloads.  More of a framework than a specific procedure.  Reference set is user defined.  Measurement methodology is user defined.

35 MASF Summary  Measurement framework is intended to be an N-period rolling average.  Ideally 10 to 20 points per reference set.  Longer-term datasets are subject to time-series influences, which distort the metrics.  This technique should be included in every Resource Management Specialist’s toolkit!
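The MASF idea above can be sketched in a few lines of Python. The aggregation-policy cells and CPU readings below are hypothetical illustrations: each cell keeps its own reference-set mean and standard deviation, and new observations are filtered against per-cell limits instead of one global limit.

```python
import statistics

# Hypothetical reference-set CPU readings, keyed by aggregation-policy cell
reference = {
    ("Mon/Fri", "peak"):    [62, 65, 60, 64, 63, 61],
    ("Mon/Fri", "offpeak"): [35, 33, 36, 34, 32, 35],
}

# Per-cell MASF-style filtering limits: mean +/- 3 standard deviations
limits = {}
for cell, values in reference.items():
    m = statistics.mean(values)
    s = statistics.stdev(values)
    limits[cell] = (m - 3 * s, m + 3 * s)

def is_exception(cell, observation):
    """Flag an observation that falls outside its cell's reference limits."""
    lo, hi = limits[cell]
    return not (lo <= observation <= hi)

print(is_exception(("Mon/Fri", "peak"), 75))  # well above the reference set
```

In practice the reference set would roll forward period by period, per the N-period rolling average noted above.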

36 Analysis of Variance (ANOVA)  A comparison of parameters across populations.  Best explained by why it was developed.  Agricultural work in the early 1900s to improve crop yields.  A plot of land was divided into multiple areas and subjected to different treatments.  A test was developed to compare the effects of these different treatments on crop yield.

37 ANOVA  Example of how this type of experiment would be setup:  Important Point - We are dealing with six separate populations.

38 ANOVA  Same ground rules as Hypothesis Testing  Start off by assuming all population means are equal.  Null Hypothesis  Attempt to prove they are not all the same.  Alternative Hypothesis  However, calculation of the result is very laborious and best done by a computer.

39 ANOVA  Test stated in similar fashion to Hypothesis Testing.

40 ANOVA  Accepting Null Hypothesis has same meaning as Hypothesis Testing.  Can’t prove any mean is different – end of test.  Accepting Alternative Hypothesis has an interesting twist.  One or more of the means are different – but which one(s) is/are different?

41 ANOVA  Tukey test answers the Alternative Hypothesis question.  John Tukey developed a technique to group means of an ANOVA test when the Alternative Hypothesis is accepted.  We now have a way to take a set of multiple data populations and segment them into like groups.

42 ANOVA  SMF Data Volume Example

43 ANOVA  SAS Proc ANOVA Procedure

Proc ANOVA;
  Class Day;
  Model Count = Day;
  Means Day / Tukey;
Run;

44 ANOVA  Key Results from Test  Pr > F = .0424  We conclude at a 95% confidence level that one or more of the means are different.  Tukey Test  Monday and Friday are different  All other days are the same  A certain degree of ambiguity
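The F statistic behind that Pr > F value can be computed by hand. Here is a minimal one-way ANOVA sketch in Python with hypothetical daily counts (the paper's SMF data is not reproduced here, and the paper itself uses the SAS procedure above rather than this code):

```python
import statistics

# Hypothetical daily SMF record counts: three observations per weekday group
groups = {
    "Mon": [510, 495, 520],
    "Wed": [480, 475, 485],
    "Fri": [530, 540, 525],
}

values = [v for g in groups.values() for v in g]
grand_mean = statistics.mean(values)
k = len(groups)   # number of treatments (days)
n = len(values)   # total observations

# Between-treatment and within-treatment sums of squares
ss_between = sum(len(g) * (statistics.mean(g) - grand_mean) ** 2
                 for g in groups.values())
ss_within = sum((v - statistics.mean(g)) ** 2
                for g in groups.values() for v in g)

f_stat = (ss_between / (k - 1)) / (ss_within / (n - k))
print(f"F = {f_stat:.2f} on ({k - 1}, {n - k}) degrees of freedom")
```

A large F relative to the tabled critical value for (2, 6) degrees of freedom leads to accepting the alternative hypothesis that not all day means are equal.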

45 ANOVA  Typical way to report or display the results of a Tukey test. (Grouping chart: Mon, Tue, Wed, Thu, Fri with bars marking the Tukey groups.)

46 ANOVA  Second test.  Compare the day of the week across weeks

Proc ANOVA;
  Format Date Date8.;
  Class Date;
  Model Count = Date;
  By Day;
Run;

47 ANOVA  Results from the second test.  All five tests accepted the null hypothesis.  Pr > F values were all in the high 90% range.  So the ‘official’ statement is:  The data is insufficient to conclude there is any difference in the mean value for a day of the week across weeks.

48 ANOVA  Statistical Assumptions that need to be considered.  Sufficient data is needed to obtain 6 to 10 observations for each treatment.  Need to be sensitive to correlated data.  Sampling plan must be random and cover the boundaries of the population being examined.

49 ANOVA  Practical Uses  Comparing data from multiple days to see if it is the same or different.  Use it as a clustering technique to build aggregated data groups for a MASF analysis.  Multiple factor ANOVAs can look at multiple treatments (factors) at the same time.  Day of week and hour of day.  A very powerful tool that should be in everybody’s toolkit!

50 Midrange Server Example  One month of prime-shift usage data for an OLTP server.  The MASF technique will be used to look for deviations.  The first three weeks will be used as the reference set to examine the fourth week's data.  ANOVA will be used to create Aggregation Policies to cluster the hourly data.

51 Midrange Server Example Table of Hourly Usage Metrics (reference set)

52 Midrange Server Example  ANOVA test was performed on the hours of the day.  Two overlapping groups were identified. (Tukey grouping chart: CPUAVE by hour of day.)

53 Midrange Server Example  A second ANOVA test was performed on the day of the week.  Identified two non-overlapping groups.  Group 1  Monday and Friday.  Group 2  Tuesday, Wednesday and Thursday.

54 Midrange Server Example  The following aggregation policy was built for this workload.
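A minimal sketch of how such a policy might be encoded follows. The day groups reflect the two non-overlapping ANOVA groups from the previous slide; the group labels and the two-way hour split are hypothetical illustrations, not the paper's actual policy.

```python
# Day groups from the day-of-week ANOVA: {Mon, Fri} vs {Tue, Wed, Thu}
DAY_GROUP = {"Mon": "MonFri", "Fri": "MonFri",
             "Tue": "TueWedThu", "Wed": "TueWedThu", "Thu": "TueWedThu"}

def hour_group(hour):
    # Assumption: the prime shift splits into two hour clusters at noon
    return "early" if hour < 12 else "late"

def policy_key(day, hour):
    """Map an observation to its aggregation-policy cell."""
    return (DAY_GROUP[day], hour_group(hour))

print(policy_key("Mon", 9))    # ('MonFri', 'early')
print(policy_key("Wed", 14))   # ('TueWedThu', 'late')
```

Each cell of the policy then accumulates its own reference-set statistics for the MASF detection limits.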

55 Midrange Server Example  The aggregation policy was used to build the following reference set from the table of hourly usage metrics:

56 Midrange Server Example  Plotting this along with the actual data from the fourth week produced the following control chart for Monday:

57 Midrange Server Example Exception Table for Rest of Week

58 Summary  So, what is in your toolkit?  Pick up these tools at your nearest CMG meeting. They do take some getting used to, but are worth the learning curve.  Hypothesis Testing, Statistical Process Control, MASF and ANOVA  Be very wary of your data.  The Time Series Data we routinely work with is a very complicated multi-dimensional dataset.  Get to know your data. The better you know the data, the better you know your workload.

59 Summary  Next Step - Recommended Reading  I. Trubin’s CMG papers on the application of MASF and variance-based statistical detection techniques.  2001 – Exception Detection System, Based on Statistical Process Control Concept  2002 – Global and Application Levels Exception Detection System, Based on MASF Technique  2003 – Disk Subsystem Capacity Management, Based on Business Drivers, I/O Performance Metrics and MASF  2004 – Mainframe Global and Workload Levels Statistical Exception Detection System, Based on MASF  2005 – Capturing Workload Pathology by Statistical Exception Detection System

60 Questions ???

