A Review of Probability and Statistics


1 A Review of Probability and Statistics
Descriptive statistics
Probability
Random variables
Sampling distributions
Estimation and confidence intervals
Tests of hypotheses: for means, variances, and proportions
Goodness of fit

2 Key Concepts
Population (finite or infinite) -- "parameters"
Sample -- "statistics"
Random samples -- your MOST important decision!

3 Data
Deterministic vs. probabilistic (stochastic)
Discrete or continuous: whether a variable is continuous (measured) or discrete (counted) is a property of the data, not of the measuring device. Weight is a continuous variable, even if your scale can only measure values to the pound.
Data description:
  Category frequency
  Category relative frequency

4 Data Types
Qualitative (categorical)
  Nominal -- IE = 1; EE = 2; CE = 3
  Ordinal -- poor = 1; fair = 2; good = 3; excellent = 4
Quantitative (numerical)
  Interval -- temperature, viscosity
  Ratio -- weight, height
The type of statistics you can calculate depends on the data type. Average, median, and variance make no sense if the data is categorical (proportions do).

5 Data Presentation for Qualitative Data
Rules:
  Each observation MUST fall in one and only one category.
  All observations must be accounted for.
Table -- provides greater detail
Bar graphs -- consider a Pareto presentation!
Pie charts (do not need to be round)

6 Data Presentation for Quantitative Data
Consider a stem-and-leaf display
Use 5 to 20 classes (intervals, groups)
Cell width, boundaries, limits, and midpoint
Histograms
  Discrete
  Continuous (frequency polygon -- plot at class mark)
Cumulative frequency distribution (ogive -- plot at upper boundary)

7 Statistics
Measures of central tendency
  Arithmetic mean
  Median
  Mode
  Weighted mean
Measures of variation
  Range
  Variance
  Standard deviation
  Coefficient of variation
The Empirical Rule

8 Arithmetic Mean and Variance -- Raw Data
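The formula images from this slide are not in the transcript. As a standard reference (using y_i for the observations and n for the sample size), the sample mean and variance for raw data are:

```latex
\bar{y} = \frac{1}{n}\sum_{i=1}^{n} y_i ,
\qquad
s^2 = \frac{\sum_{i=1}^{n}(y_i - \bar{y})^2}{n-1}
    = \frac{\sum y_i^2 - \bigl(\sum y_i\bigr)^2 / n}{n-1} ,
\qquad
s = \sqrt{s^2}
```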

9 Arithmetic Mean and Variance -- Grouped Data
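Again, the slide's formulas are not captured. For grouped data with k classes, class frequencies f_i, and class marks (midpoints) m_i, the standard approximations are:

```latex
n = \sum_{i=1}^{k} f_i ,
\qquad
\bar{y} \approx \frac{\sum_{i=1}^{k} f_i\, m_i}{n} ,
\qquad
s^2 \approx \frac{\sum_{i=1}^{k} f_i\,(m_i - \bar{y})^2}{n-1}
```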

10 Percentiles and Box-Plots
100pth percentile: the value such that 100p% of the area under the relative frequency distribution lies below it.
Q1: lower quartile (25th percentile)
Q3: upper quartile (75th percentile)
Box plots: the box is bounded by the lower and upper quartiles
  Whiskers mark the lowest and highest values within 1.5*IQR of Q1 or Q3
  Outliers: beyond 1.5*IQR from Q1 or Q3 (mark with *)
z-scores: deviation from the mean in units of standard deviation
  Outlier: absolute value of z-score > 3
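A minimal NumPy sketch of these two outlier rules (the data values and variable names are illustrative, not from the slides):

```python
import numpy as np

data = np.array([12.1, 13.4, 11.8, 14.2, 12.9, 13.1, 25.0, 12.4, 13.7, 12.6])

# Quartiles and interquartile range
q1, q3 = np.percentile(data, [25, 75])
iqr = q3 - q1

# Box-plot fences: whiskers end at the most extreme points within 1.5*IQR of the box
lower_fence, upper_fence = q1 - 1.5 * iqr, q3 + 1.5 * iqr
box_plot_outliers = data[(data < lower_fence) | (data > upper_fence)]

# z-scores: deviation from the mean in units of (sample) standard deviation
z = (data - data.mean()) / data.std(ddof=1)
z_score_outliers = data[np.abs(z) > 3]

print(q1, q3, iqr, box_plot_outliers, z_score_outliers)
```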

11 Probability: Basic Concepts
Experiment: a process of OBSERVATION
Simple event: an OUTCOME of an experiment that cannot be decomposed
  "Mutually exclusive"
  "Equally likely"
Sample space: the set of all possible outcomes
Event "A": the set of all possible simple events that result in the outcome "A"

12 Probability
A measure of the uncertainty of an estimate; the reliability of an inference.
Theoretical approach -- "a priori": Pr(Ai) = n/N
  n = number of possible ways "Ai" can be observed
  N = total number of possible outcomes
Historical (empirical) approach -- "a posteriori": Pr(Ai) = n/N
  n = number of times "Ai" was observed
  N = total number of observations
Subjective approach -- an "expert opinion"

13 Probability Rules
Multiplication rule: the number of ways to draw one element from set 1 (which contains n1 elements), then an element from set 2, ..., and finally an element from set k (ORDER IS IMPORTANT!) is
n1 * n2 * ... * nk

14 Permutations and Combinations
Permutations: the number of ways to draw r out of n elements WHEN ORDER IS IMPORTANT.
Combinations: the number of ways to select r out of n items when order is NOT important.
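The slide's formulas are not shown in the transcript; the standard counting formulas are:

```latex
P_r^n = \frac{n!}{(n-r)!} ,
\qquad
\binom{n}{r} = \frac{n!}{r!\,(n-r)!}
```

For example, drawing 2 of 5 items gives 5!/3! = 20 ordered arrangements but only 20/2! = 10 distinct pairs.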

15 Compound Events
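The body of this slide is not captured in the transcript. A result typically presented for compound events (unions and intersections) is the additive rule:

```latex
P(A \cup B) = P(A) + P(B) - P(A \cap B)
```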

16 Conditional Probability
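The slide's formula is not in the transcript; the standard definition of conditional probability, together with the multiplicative rule it implies, is:

```latex
P(A \mid B) = \frac{P(A \cap B)}{P(B)} \quad (P(B) > 0),
\qquad
P(A \cap B) = P(A \mid B)\,P(B)
```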

17 Other Probability Rules
Mutually exclusive events:
Independence: A and B are said to be statistically INDEPENDENT if and only if:
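The conditions themselves are images on the original slide; the standard statements are:

```latex
\text{Mutually exclusive: } P(A \cap B) = 0 \;\Rightarrow\; P(A \cup B) = P(A) + P(B)

\text{Independence: } P(A \cap B) = P(A)\,P(B) \;\Longleftrightarrow\; P(A \mid B) = P(A)
```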

18 Bayes’ Rule
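The rule itself is an image on the original slide; for events B_1, ..., B_k that partition the sample space, the standard form is:

```latex
P(B_i \mid A) = \frac{P(A \mid B_i)\,P(B_i)}{\sum_{j=1}^{k} P(A \mid B_j)\,P(B_j)}
```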

19 Random Variables
Random variable: a function that maps every possible outcome of an experiment into a numerical value.
Discrete random variable: the function can assume a finite (or countably infinite) number of values.
Continuous random variable: the function can assume any value between two limits.

20 Probability Distribution for a Discrete Random Variable
A function that assigns a probability p(y) to each possible value y of the random variable.
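The slide's requirements are not captured; the standard conditions on a discrete probability distribution are:

```latex
0 \le p(y) \le 1 \ \text{for every } y ,
\qquad
\sum_{\text{all } y} p(y) = 1
```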

21 Poisson Process
Events occur over time (or in a given area, volume, weight, distance, ...).
The probability of observing an event in a given unit of time is constant.
A unit of time can be defined small enough that we cannot observe two or more events simultaneously.
Tables usually give CUMULATIVE values!

22 The Poisson Distribution
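The probability function is an image on the original slide; the standard Poisson probability function with mean λ is:

```latex
p(y) = \frac{\lambda^{y} e^{-\lambda}}{y!}, \quad y = 0, 1, 2, \dots ,
\qquad
E(Y) = V(Y) = \lambda
```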

23 Poisson Approximation to the Binomial
In a binomial situation where n is very large (n > 25) and p is very small (p < 0.30, with np < 15), we can approximate b(x, n, p) by a Poisson distribution with parameter lambda = np.
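A short SciPy sketch of this approximation (the values n = 200 and p = 0.02 are illustrative, not from the slides):

```python
from scipy.stats import binom, poisson

n, p = 200, 0.02        # large n, small p, so np = 4
lam = n * p

# Compare the exact binomial probabilities with the Poisson approximation
for x in range(8):
    print(x, round(binom.pmf(x, n, p), 4), round(poisson.pmf(x, lam), 4))
```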

24 Probability Distribution for a Continuous Random Variable
F(y0) is the cumulative distribution function: it gives the probability of observing a value less than or equal to y0.
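The slide's formulas are not in the transcript; for a continuous random variable with density f(y), the standard relations are:

```latex
F(y_0) = P(Y \le y_0) = \int_{-\infty}^{y_0} f(y)\,dy ,
\qquad
P(a \le Y \le b) = F(b) - F(a)
```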

25 Probability Calculations

26 Expectations
Properties of expectations
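The properties listed on the slide are images; the standard definitions and rules are:

```latex
E(Y) = \sum_{y} y\,p(y) \ \text{(discrete)} \quad\text{or}\quad E(Y) = \int_{-\infty}^{\infty} y\,f(y)\,dy \ \text{(continuous)}

E(aY + b) = a\,E(Y) + b ,
\qquad
V(Y) = E(Y^2) - [E(Y)]^2 ,
\qquad
V(aY + b) = a^2\,V(Y)
```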

27 The Uniform Distribution
A frequently used model when no data are available.
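The density is an image on the original slide; the standard uniform distribution on [a, b] has:

```latex
f(y) = \frac{1}{b - a}, \quad a \le y \le b ,
\qquad
E(Y) = \frac{a + b}{2}, \quad V(Y) = \frac{(b - a)^2}{12}
```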

28 The Triangular Distribution
A good model to use when no data are available. Just ask an expert to estimate the minimum, maximum, and most likely values.

29 The Normal Distribution
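The density is an image on the original slide; the standard normal model with mean μ and standard deviation σ is:

```latex
f(y) = \frac{1}{\sigma\sqrt{2\pi}}\,\exp\!\left[-\frac{(y - \mu)^2}{2\sigma^2}\right], \quad -\infty < y < \infty ,
\qquad
Z = \frac{Y - \mu}{\sigma} \sim N(0, 1)
```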

30 The Lognormal Distribution
Consider this model when 80 percent of the data values lie in the first 20 % of the variable’s range.

31 The Gamma Distribution
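The density is an image on the original slide. With shape parameter α and scale parameter β (parameterizations vary between texts), the standard gamma density is:

```latex
f(y) = \frac{y^{\alpha - 1} e^{-y/\beta}}{\beta^{\alpha}\,\Gamma(\alpha)}, \quad y > 0 ,
\qquad
E(Y) = \alpha\beta, \quad V(Y) = \alpha\beta^{2}
```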

32 The Erlang Distribution
A special case of the gamma distribution in which the shape parameter is a positive integer k. It arises in a Poisson process when we are interested in the time required to observe k events.

33 The Exponential Distribution
A special case of the gamma distribution in which the shape parameter equals 1: the time between events in a Poisson process.
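In the shape and scale notation used above (shape 1, scale β), the standard exponential density and distribution function are:

```latex
f(y) = \frac{1}{\beta}\,e^{-y/\beta}, \quad y \ge 0 ,
\qquad
F(y) = 1 - e^{-y/\beta} ,
\qquad
E(Y) = \beta, \quad V(Y) = \beta^{2}
```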

34 The Weibull Distribution
A good model for failure time distributions of manufactured items. It has a closed expression for F ( y ).
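The closed-form distribution function referred to above, written with shape α and scale β (a common but not universal parameterization), is:

```latex
F(y) = 1 - \exp\!\left[-\left(\frac{y}{\beta}\right)^{\alpha}\right], \quad y \ge 0
```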

35 The Beta Distribution A good model for proportions. You can fit almost any data. However, the data set MUST be bounded!

36 Bivariate Data (Pairs of Random Variables)
Covariance: measures the strength of a linear relationship.
Correlation: a standardized version of the covariance.
Autocorrelation (for a single time series): the relationship between an observation and those immediately preceding it. Does the current value (Xt) relate to itself lagged one period (Xt-1)?
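The formulas are images on the original slide; the standard population definitions are:

```latex
\operatorname{Cov}(X, Y) = E\big[(X - \mu_X)(Y - \mu_Y)\big] ,
\qquad
\rho = \frac{\operatorname{Cov}(X, Y)}{\sigma_X\,\sigma_Y}, \quad -1 \le \rho \le 1
```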

37 Sampling Distributions
See slides 8 and 9 for the formulas used to calculate sample means and variances (raw data and grouped data, respectively).

38 The Sampling Distribution of the Mean (Central Limit Theorem)
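The statement of the theorem is an image on the original slide. For a random sample of size n from a population with mean μ and standard deviation σ, the standard result is:

```latex
E(\bar{Y}) = \mu ,
\qquad
\sigma_{\bar{Y}} = \frac{\sigma}{\sqrt{n}} ,
\qquad
Z = \frac{\bar{Y} - \mu}{\sigma/\sqrt{n}} \approx N(0, 1) \ \text{for large } n
```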

39 The Sampling Distribution of Sums

40 Distributions Related to Variances
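The result on this slide is an image; when sampling from a normal population, the standard pivotal quantity for the variance is:

```latex
\frac{(n - 1)\,S^2}{\sigma^2} \sim \chi^2_{\,n-1}
```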

41 The t Distribution
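Likewise, when sampling from a normal population with σ unknown, the standard result is:

```latex
T = \frac{\bar{Y} - \mu}{S/\sqrt{n}} \sim t_{\,n-1}
```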

42 Estimation
Point and interval estimators
Properties of point estimators
  Unbiased: E(estimator) = estimated parameter
  Note: S2 is unbiased if the divisor is (n - 1) rather than n.
  MVUE: minimum variance unbiased estimator
Most frequently used method to estimate parameters: MLE -- maximum likelihood estimation.

43 Interval Estimators -- Large sample CI for mean
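The interval itself is an image on the original slide; the standard large-sample 100(1 - α)% confidence interval for the mean is:

```latex
\bar{y} \pm z_{\alpha/2}\,\frac{\sigma}{\sqrt{n}}
\qquad (\text{use } s \text{ in place of } \sigma \text{ when } n \text{ is large})
```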

44 Interval Estimators -- Small sample CI for mean
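The corresponding standard small-sample interval, assuming an approximately normal population, is:

```latex
\bar{y} \pm t_{\alpha/2,\;n-1}\,\frac{s}{\sqrt{n}}
```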

45 Sample Size
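The formula is an image on the original slide; the standard sample-size expression for estimating a mean to within a margin of error E is:

```latex
n = \left(\frac{z_{\alpha/2}\,\sigma}{E}\right)^{2}
```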

46 CI for proportions (large samples)
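The standard large-sample interval for a proportion (the slide's formula image is not in the transcript) is:

```latex
\hat{p} \pm z_{\alpha/2}\,\sqrt{\frac{\hat{p}\,(1 - \hat{p})}{n}}
```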

47 Sample Size (proportions)

48 CI for the variance

49 CI for the Difference of Two Means -- large samples --

50 CI for (p1 - p2) --- (large samples)

51 CI for the Difference of Two Means -- small samples, same variance --

52 CI for the Difference of Two Means -small samples, different variances-

53 CI for the Difference of Two Means -- matched pairs --

54 CI for two variances

55 Prediction Intervals

56 Hypothesis Testing
Elements of a statistical test. Focus on the decisions made when comparing the observed sample to a claim (hypothesis). How do we decide whether the sample disagrees with the hypothesis?
Null hypothesis, H0: a claim about one or more population parameters; what we want to REJECT.
Alternative hypothesis, Ha: what we test against; provides the criteria for rejection of H0.
Test statistic: computed from the sample data.
Rejection (critical) region: the values of the test statistic for which we will reject H0.
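A hypothetical worked example of these elements in Python, using scipy.stats.ttest_1samp (the sample data and the hypothesized mean of 50 are illustrative, not from the slides):

```python
import numpy as np
from scipy import stats

# H0: mu = 50  vs  Ha: mu != 50 (two-sided test), alpha = 0.05
sample = np.array([51.2, 49.8, 50.9, 52.1, 48.7, 51.5, 50.3, 49.9, 52.4, 50.6])

result = stats.ttest_1samp(sample, popmean=50)   # test statistic and p-value
alpha = 0.05

print("t =", result.statistic)
print("p-value =", result.pvalue)
print("reject H0" if result.pvalue < alpha else "do not reject H0")
```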

57 Errors in Decision Making
                    True State of Nature
Decision            H0: Dishonest client    Ha: Honest client
Do not lend         Correct decision        Type II error
Lend                Type I error            Correct decision

58 Statistical Errors

59 Statistical Tests

60 The Critical Value

61 The observed significance level for a test

62 Testing proportions (large samples)

63 Testing a Normal Mean

64 Testing a variance

65 Testing Differences of Two Means -- large samples --

66 Testing Differences of Two Means -- small samples, same variance --

67 Testing Differences of Two Means -small samples, different variances-

68 Testing Difference of Two Means -- matched pairs --

69 Testing a ratio of two variances

70 Testing (p1 - p2) --- (large samples)

71 Categorical Data

72 One-way Tables (Cont.)

73 Categorical Data Analysis

74 Example of a Contingency Table

75 Testing for Independence
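The test statistic is an image on the original slide; for an r x c contingency table with grand total n, the standard expected counts and statistic are:

```latex
\hat{E}_{ij} = \frac{(\text{row } i \text{ total})\,(\text{column } j \text{ total})}{n} ,
\qquad
\chi^2 = \sum_{i=1}^{r}\sum_{j=1}^{c} \frac{(O_{ij} - \hat{E}_{ij})^2}{\hat{E}_{ij}} ,
\qquad
df = (r - 1)(c - 1)
```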

76 Distributions: Model Fitting Steps
1. Collect data. Make sure you have a random sample. You will need at least 30 valid cases.
2. Plot the data. Look for familiar patterns.
3. Hypothesize several models for the distribution.
4. Using part of the data, estimate the model parameters.
5. Using the rest of the data, analyze the model's accuracy.
6. Select the "best" model and implement it.
7. Keep track of model accuracy over time. If warranted, go back to step 6 (or to step 3, if the data (population?) behavior keeps changing).

77 Chi-Square Test of Goodness of Fit
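The statistic is an image on the original slide; for k cells with observed counts O_i and expected counts E_i, the standard form is:

```latex
\chi^2 = \sum_{i=1}^{k} \frac{(O_i - E_i)^2}{E_i} ,
\qquad
df = k - 1 - (\text{number of parameters estimated from the data})
```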

78 Kolmogorov-Smirnov Test of Goodness of Fit
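The statistic is an image on the original slide; with F_n the empirical distribution function and F_0 the hypothesized one, the standard K-S statistic is:

```latex
D = \sup_{y}\,\bigl|F_n(y) - F_0(y)\bigr|
```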

79 A Review of Probability and Statistics
Descriptive statistics
Probability
Random variables
Sampling distributions
Estimation and confidence intervals
Tests of hypotheses: for means, variances, and proportions
Goodness of fit

