A Review of Probability and Statistics Descriptive statistics Probability Random variables Sampling distributions Estimation and confidence intervals Tests of hypothesis for means, variances, and proportions Goodness of fit
Key Concepts Population (finite or infinite) -- "parameters" Sample -- "statistics" Random samples - Your MOST important decision!
Data Deterministic vs. Probabilistic (Stochastic) Discrete or Continuous: Whether a variable is continuous (measured) or discrete (counted) is a property of the data, not of the measuring device: weight is a continuous variable, even if your scale can only measure values to the pound. Data description: Category frequency Category relative frequency
Data Types Qualitative (Categorical) Nominal -- IE = 1 ; EE = 2 ; CE = 3 Ordinal -- poor = 1 ; fair = 2 ; good = 3 ; excellent = 4 Quantitative (Numerical) Interval -- temperature, viscosity Ratio -- weight, height The type of statistics you can calculate depends on the data type. Average, median, and variance make no sense if the data is categorical (proportions do).
Data Presentation for Qualitative Data Rules: Each observation MUST fall in one and only one category. All observations must be accounted for. Table -- Provides greater detail Bar graphs -- Consider Pareto presentation! Pie charts (do not need to be round)
Data Presentation for Quantitative Data Consider a Stem-and-Leaf Display Use 5 to 20 classes (intervals, groups). Cell width, boundaries, limits, and midpoint Histograms Discrete Continuous (frequency polygon - plot at class mark) Cumulative frequency distribution (Ogive - plot at upper boundary)
Statistics Measures of Central Tendency: Arithmetic Mean, Median, Mode, Weighted Mean Measures of Variation: Range, Variance, Standard Deviation, Coefficient of Variation The Empirical Rule
Arithmetic Mean and Variance -- Raw Data
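A minimal sketch of the raw-data formulas, with hypothetical measurements, checked against Python's `statistics` module:

```python
import statistics

# Raw-data formulas: mean = (sum of x_i)/n, s^2 = sum((x_i - mean)^2)/(n - 1)
data = [12.0, 15.0, 11.0, 14.0, 13.0]  # hypothetical measurements

n = len(data)
mean = sum(data) / n
var = sum((x - mean) ** 2 for x in data) / (n - 1)  # sample variance, n - 1 divisor

assert mean == statistics.mean(data)
assert abs(var - statistics.variance(data)) < 1e-12
```

The n - 1 divisor gives the unbiased sample variance discussed later under point estimators.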
Arithmetic Mean and Variance -- Grouped Data
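For grouped data the same statistics are approximated from class marks (midpoints) and frequencies; a sketch with a hypothetical frequency table:

```python
# Grouped-data formulas: mean ~= (sum f_i * m_i)/n,
#                        s^2  ~= (sum f_i * (m_i - mean)^2)/(n - 1)
marks = [5.0, 15.0, 25.0, 35.0]  # hypothetical class marks (midpoints)
freqs = [2, 5, 8, 5]             # hypothetical class frequencies

n = sum(freqs)
mean = sum(f * m for f, m in zip(freqs, marks)) / n
var = sum(f * (m - mean) ** 2 for f, m in zip(freqs, marks)) / (n - 1)
```

These are approximations: every observation in a class is treated as if it fell at the class mark.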
Percentiles and Box-Plots 100pth percentile: value such that 100p% of the area under the relative frequency distribution lies below it. Q1: lower quartile (25th percentile) Q3: upper quartile (75th percentile) Box-Plots: box limited by the lower and upper quartiles Whiskers mark the lowest and highest values within 1.5*IQR of Q1 or Q3 Outliers: beyond 1.5*IQR from Q1 or Q3 (mark with *) z-scores: deviation from the mean in units of standard deviation. Outlier: absolute value of z-score > 3
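A sketch of the 1.5*IQR outlier fences and z-scores on a hypothetical sample, using `statistics.quantiles` (which defaults to the "exclusive" quartile method):

```python
import statistics

data = [2, 4, 5, 7, 8, 9, 11, 50]  # hypothetical sample with one extreme value

q1, q2, q3 = statistics.quantiles(data, n=4)  # quartiles
iqr = q3 - q1
lower_fence = q1 - 1.5 * iqr
upper_fence = q3 + 1.5 * iqr
outliers = [x for x in data if x < lower_fence or x > upper_fence]

# z-scores: deviation from the mean in units of standard deviation
mean = statistics.mean(data)
sd = statistics.stdev(data)
z = [(x - mean) / sd for x in data]
```

Note the two rules can disagree: a point can lie beyond the 1.5*IQR fences yet have |z| < 3, as the extreme value inflates the standard deviation.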
Probability: Basic Concepts Experiment: A process of OBSERVATION Simple event - An OUTCOME of an experiment that cannot be decomposed “Mutually exclusive” “Equally likely” Sample Space - The set of all possible outcomes Event “A” - The set of all possible simple events that result in the outcome “A”
Probability A measure of uncertainty of an estimate The reliability of an inference Theoretical approach - “A Priori” Pr (Ai) = n/N n = number of possible ways “Ai” can be observed N = total number of possible outcomes Historical (empirical) approach - “A Posteriori” Pr (Ai) = n/N n = number of times “Ai” was observed N = total number of observations Subjective approach An “Expert Opinion”
Probability Rules Multiplication Rule: Number of ways to draw one element from set 1 (which contains n1 elements), then an element from set 2, ..., and finally an element from set k (ORDER IS IMPORTANT!): n1 * n2 * ... * nk
Permutations and Combinations Permutations: Number of ways to draw r out of n elements WHEN ORDER IS IMPORTANT: P(n, r) = n! / (n - r)! Combinations: Number of ways to select r out of n items when order is NOT important: C(n, r) = n! / (r! (n - r)!)
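The counting formulas are available directly in Python's `math` module; a quick sketch with n = 5, r = 3:

```python
import math

n, r = 5, 3
perms = math.perm(n, r)   # ordered draws: n! / (n - r)!
combs = math.comb(n, r)   # unordered selections: n! / (r! (n - r)!)

# Each combination of r items can be ordered in r! ways:
assert perms == combs * math.factorial(r)
```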
Compound Events
Conditional Probability
Other Probability Rules Mutually Exclusive Events: P(A and B) = 0, so P(A or B) = P(A) + P(B) Independence: A and B are said to be statistically INDEPENDENT if and only if P(A and B) = P(A) * P(B)
Bayes’ Rule
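A sketch of Bayes' rule, P(A|B) = P(B|A) P(A) / P(B), with P(B) computed by total probability. The screening-test numbers below are hypothetical, chosen only to illustrate how a rare condition yields a low posterior even with an accurate test:

```python
# Hypothetical screening test: prevalence 1%, sensitivity 95%, false-positive rate 5%
p_a = 0.01              # P(condition)
p_b_given_a = 0.95      # P(positive | condition)
p_b_given_not_a = 0.05  # P(positive | no condition)

# Total probability: P(B) = P(B|A)P(A) + P(B|not A)P(not A)
p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)

# Bayes' rule
p_a_given_b = p_b_given_a * p_a / p_b
```

Despite the 95% sensitivity, the posterior P(condition | positive) is only about 0.16, because most positives come from the large unaffected group.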
Random Variables Random variable: A function that maps every possible outcome of an experiment into a numerical value. Discrete random variable: The function can assume a finite (or countably infinite) number of values Continuous random variable: The function can assume any value between two limits.
Probability Distribution for a Discrete Random Variable Function p(y) that assigns a probability to each possible value of the random variable y.
Poisson Process Events occur over time (or in a given area, volume, weight, distance, ...) Probability of observing an event in a given unit of time is constant Able to define a unit of time small enough so that we can’t observe two or more events simultaneously. Tables usually give CUMULATIVE values!
The Poisson Distribution
Poisson Approximation to the Binomial In a binomial situation where n is very large (n > 25) and p is very small (p < 0.30, and np < 15), we can approximate b(x, n, p) by a Poisson with parameter lambda = np
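A sketch comparing the two pmfs for a case well inside the rule of thumb (n = 100, p = 0.02, so lambda = np = 2); the pointwise error stays below 0.01:

```python
import math

def binom_pmf(x, n, p):
    """Exact binomial: C(n, x) * p**x * (1-p)**(n-x)"""
    return math.comb(n, x) * p ** x * (1 - p) ** (n - x)

def poisson_pmf(x, lam):
    return math.exp(-lam) * lam ** x / math.factorial(x)

n, p = 100, 0.02        # large n, small p
lam = n * p             # lambda = np = 2

# Largest pointwise disagreement over the region holding nearly all probability
max_err = max(abs(binom_pmf(x, n, p) - poisson_pmf(x, lam)) for x in range(11))
```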
Probability Distribution for a Continuous Random Variable F( y0 ) is a cumulative distribution function that assigns a value to the probability of observing a value less than or equal to y0
Probability Calculations
Expectations Properties of Expectations
The Uniform Distribution A frequently used model when no data are available.
The Triangular Distribution A good model to use when no data are available. Just ask an expert to estimate the minimum, maximum, and most likely values.
The Normal Distribution
The Lognormal Distribution Consider this model when 80 percent of the data values lie in the first 20 percent of the variable’s range.
The Gamma Distribution
The Erlang Distribution A special case of the Gamma Distribution when the shape parameter k is an integer. A Poisson process where we are interested in the time to observe k events
The Exponential Distribution A special case of the Gamma Distribution when the shape parameter equals 1 (the time to the first event in a Poisson process)
The Weibull Distribution A good model for failure time distributions of manufactured items. It has a closed expression for F ( y ).
The Beta Distribution A good model for proportions. You can fit almost any data. However, the data set MUST be bounded!
Bivariate Data (Pairs of Random Variables) Covariance: measures strength of linear relationship Correlation: a standardized version of the covariance Autocorrelation: For a single time series: Relationship between an observation and those immediately preceding it. Does current value (Xt) relate to itself lagged one period (Xt-1)?
Sampling Distributions See slides 8 and 9 for formulas to calculate sample means and variances (raw data and grouped data, simultaneously).
The Sampling Distribution of the Mean (Central Limit Theorem)
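A small simulation sketch of the Central Limit Theorem: sample means of size-n draws from a Uniform(0, 1) population (mean 0.5, variance 1/12) cluster around 0.5 with standard error sigma/sqrt(n). The sample size and repetition count are arbitrary choices:

```python
import random
import statistics

random.seed(42)  # fixed seed so the run is reproducible

n, reps = 30, 2000
means = [statistics.mean(random.random() for _ in range(n)) for _ in range(reps)]

grand_mean = statistics.mean(means)       # should be near the population mean 0.5
se_observed = statistics.stdev(means)     # should be near sigma / sqrt(n)
se_theory = (1 / 12) ** 0.5 / n ** 0.5
```

A histogram of `means` would look approximately normal even though the parent population is flat.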
The Sampling Distribution of Sums
Distributions Related to Variances
The t Distribution
Estimation Point and Interval Estimators Properties of Point Estimators Unbiased: E (estimator) = estimated parameter Note: S2 is unbiased if the sum of squared deviations is divided by n - 1 MVUE: Minimum Variance Unbiased Estimators Most frequently used method to estimate parameters: MLE - Maximum Likelihood Estimators.
Interval Estimators -- Large sample CI for mean
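A sketch of the large-sample interval x-bar +/- z(alpha/2) * s / sqrt(n), on a hypothetical sample of n = 40, using `statistics.NormalDist` for the critical value:

```python
import statistics

# Hypothetical sample, n = 40 (large enough to use the z critical value)
sample = [14.2, 15.1, 13.8, 14.9, 15.3, 14.4, 15.0, 14.7] * 5
n = len(sample)

xbar = statistics.mean(sample)
s = statistics.stdev(sample)
z = statistics.NormalDist().inv_cdf(0.975)  # z_{0.025}, about 1.96

half_width = z * s / n ** 0.5
ci = (xbar - half_width, xbar + half_width)  # 95% CI for the mean
```

The interpretation is about the procedure, not one interval: 95% of intervals built this way capture the true mean.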
Interval Estimators -- Small sample CI for mean
Sample Size
CI for proportions (large samples)
Sample Size (proportions)
CI for the variance
CI for the Difference of Two Means -- large samples --
CI for (p1 - p2) --- (large samples)
CI for the Difference of Two Means -- small samples, same variance --
CI for the Difference of Two Means -small samples, different variances-
CI for the Difference of Two Means -- matched pairs --
CI for two variances
Prediction Intervals
Hypothesis Testing Elements of a Statistical Test. Focus on decisions made when comparing the observed sample to a claim (hypotheses). How do we decide whether the sample disagrees with the hypothesis? Null Hypothesis, H0. A claim about one or more population parameters. What we want to REJECT. Alternative Hypothesis, Ha: What we test against. Provides criteria for rejection of H0. Test Statistic: computed from sample data. Rejection (Critical) Region, indicates values of the test statistic for which we will reject H0.
Errors in Decision Making (lending example)

                      True State of Nature
Decision         H0: Dishonest client    Ha: Honest client
Do not lend      Correct decision        Type II error
Lend             Type I error            Correct decision
Statistical Errors
Statistical Tests
The Critical Value
The observed significance level for a test
Testing proportions (large samples)
Testing a Normal Mean
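A sketch of a two-sided z test of H0: mu = mu0 against Ha: mu != mu0, for the known-sigma case; the numbers are hypothetical:

```python
import statistics

# Hypothetical data: H0: mu = 50 vs Ha: mu != 50, sigma known
mu0, sigma, n = 50.0, 4.0, 36
xbar = 51.8

z = (xbar - mu0) / (sigma / n ** 0.5)               # test statistic
p_value = 2 * (1 - statistics.NormalDist().cdf(abs(z)))  # two-sided
reject = p_value < 0.05                              # decision at alpha = 0.05
```

With sigma unknown and a small sample, the same statistic uses s in place of sigma and is referred to the t distribution with n - 1 degrees of freedom.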
Testing a variance
Testing Differences of Two Means -- large samples --
Testing Differences of Two Means -- small samples, same variance --
Testing Differences of Two Means -small samples, different variances-
Testing Difference of Two Means -- matched pairs --
Testing a ratio of two variances
Testing (p1 - p2) --- (large samples)
Categorical Data
One-way Tables (Cont.)
Categorical Data Analysis
Example of a Contingency Table
Testing for Independence
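A sketch of the independence test on a hypothetical 2x2 contingency table: chi-square = sum over cells of (O - E)^2 / E, with expected counts E_ij = (row total * column total) / grand total and df = (r - 1)(c - 1):

```python
# Hypothetical observed counts (2 rows x 2 columns)
observed = [[30, 20],
            [20, 30]]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand = sum(row_totals)

chi2 = 0.0
for i, row in enumerate(observed):
    for j, o in enumerate(row):
        e = row_totals[i] * col_totals[j] / grand  # expected under independence
        chi2 += (o - e) ** 2 / e

df = (len(observed) - 1) * (len(observed[0]) - 1)
# Tabulated chi-square critical value for df = 1, alpha = 0.05 is about 3.841
reject = chi2 > 3.841
```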
Distributions: Model Fitting Steps
1. Collect data. Make sure you have a random sample. You will need at least 30 valid cases.
2. Plot data. Look for familiar patterns.
3. Hypothesize several models for the distribution.
4. Using part of the data, estimate model parameters.
5. Using the rest of the data, analyze the model’s accuracy.
6. Select the “best” model and implement it.
7. Keep track of model accuracy over time. If warranted, go back to 6 (or to 3, if data (population?) behavior keeps changing).
Chi-Square Test of Goodness of Fit
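A sketch of the goodness-of-fit statistic, chi-square = sum of (O_i - E_i)^2 / E_i, on a hypothetical die-fairness check; df = k - 1 when no parameters are estimated from the data:

```python
# Hypothetical: 60 rolls of a die, H0: the die is fair (E_i = 10 per face)
observed = [8, 12, 9, 11, 10, 10]
expected = [60 / 6] * 6

chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
df = len(observed) - 1
# Tabulated chi-square critical value for df = 5, alpha = 0.05 is about 11.07,
# so here we fail to reject H0.
reject = chi2 > 11.07
```

A common rule of thumb is to require every expected count to be at least 5, pooling cells if necessary.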
Kolmogorov-Smirnov Test of Goodness of Fit
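A sketch of the K-S statistic, D = max |F_n(x) - F0(x)|, comparing the empirical CDF of a hypothetical sample to a hypothesized N(0, 1); the empirical CDF steps at each data point, so both the step's top (i+1)/n and bottom i/n must be checked:

```python
import statistics

data = sorted([-1.2, -0.4, 0.1, 0.3, 0.8, 1.5])  # hypothetical sample
n = len(data)
f0 = statistics.NormalDist(0, 1).cdf              # hypothesized continuous CDF

d = max(max(abs((i + 1) / n - f0(x)), abs(i / n - f0(x)))
        for i, x in enumerate(data))

# Compare d to the tabulated K-S critical value for this n and alpha
# (about 0.52 for n = 6, alpha = 0.05): here d is well below it.
```

Unlike the chi-square test, K-S needs no binning, which makes it better suited to small continuous samples.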