# Final Review Session

Exam details

- Short answer, similar to book problems
- Formulae and tables will be given
- You CAN use a calculator
- Date and Time: Dec. 7, 2006, 12-1:30 pm
- Location: Osborne Centre, Unit 1 ("A")

Things to Review

- Concepts
- Basic formulae
- Statistical tests

First Half

- Populations and samples; random sample
- Null hypothesis, alternative hypothesis, P-value
- Parameters and estimates
- Mean, median, mode
- Type I error, Type II error
- Sampling distribution, standard error
- Variance, standard deviation
- Central limit theorem
- Categorical data (nominal, ordinal); numerical data (discrete, continuous)

Second Half

- Normal distribution
- Quantile plot
- Shapiro-Wilk test
- Data transformations
- Simulation
- Randomization
- Bootstrap
- Likelihood
- Nonparametric tests
- Independent contrasts
- Observations vs. experiments
- Confounding variables
- Control group
- Replication and pseudoreplication
- Blocking
- Factorial design
- Power analysis

Example Conceptual Questions

(You've just done a two-sample t-test comparing body size of lizards on islands and the mainland.)

1. What is the probability of committing a Type I error with this test?
2. State an example of a confounding variable that may have affected this result.
3. State one alternative statistical technique that you could have used to test the null hypothesis, and describe briefly how you would have carried it out.

Randomization test

1. State the null hypothesis and compute the test statistic on the sample.
2. Randomize the data and calculate the same test statistic on the randomized data; repeat many times to build the null distribution.
3. Compare the observed test statistic to the null distribution: how unusual is it?
4. If P < 0.05, reject Ho; if P > 0.05, fail to reject Ho.
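The randomization-test loop above can be sketched in plain Python (a stdlib-only sketch; the function name, the difference-in-means statistic, and the 10,000-shuffle default are illustrative choices, not from the slides):

```python
import random
from statistics import mean

def permutation_test(group1, group2, n_perm=10000, seed=1):
    """Randomization test for a difference in means (two-tailed).

    Builds the null distribution by repeatedly shuffling the group labels
    and recomputing the test statistic, then asks how unusual the observed
    statistic is."""
    observed = mean(group1) - mean(group2)
    pooled = list(group1) + list(group2)
    n1 = len(group1)
    rng = random.Random(seed)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # randomize which values carry which label
        stat = mean(pooled[:n1]) - mean(pooled[n1:])
        if abs(stat) >= abs(observed):
            count += 1
    return count / n_perm  # approximate two-tailed P-value
```

For the lizard example, calling this with the island and mainland body sizes would approximate the two-tailed P-value without assuming normality.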

General hypothesis-testing flow: from the sample, compute the test statistic; compare it to the null distribution under Ho to ask how unusual it is. If P < 0.05, reject Ho; if P > 0.05, fail to reject Ho.

Statistical tests

- Binomial test
- Chi-squared goodness-of-fit (proportional, binomial, Poisson)
- Chi-squared contingency test
- t-tests: one-sample, paired, two-sample
- F-test for comparing variances
- Welch's t-test
- Sign test
- Mann-Whitney U
- Correlation
- Spearman's r
- Regression
- ANOVA

Quick reference summary: Binomial test

- What is it for? Compares the proportion of successes in a sample to a hypothesized value, p₀
- What does it assume? Individual trials are randomly sampled and independent
- Test statistic: X, the number of successes
- Distribution under Ho: binomial with parameters n and p₀
- Formula: P = 2 × Pr[X ≥ x] (two-tailed), where Pr[X = x] = C(n, x) p^x (1 − p)^(n − x); p = probability of success in each trial, n = total number of trials

Binomial test flow: Ho: Pr[success] = p₀. From the sample, the test statistic is x, the number of successes; compare it to the binomial(n, p₀) null distribution to ask how unusual it is. If P < 0.05, reject Ho; if P > 0.05, fail to reject Ho.

Binomial test
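As a sketch of the arithmetic (stdlib-only Python; the function names are illustrative), the two-tailed binomial P-value can be computed directly from the binomial probability formula:

```python
from math import comb

def binom_pmf(x, n, p):
    """Pr[X = x] = C(n, x) * p^x * (1 - p)^(n - x)."""
    return comb(n, x) * p ** x * (1 - p) ** (n - x)

def binomial_test(x, n, p0):
    """Two-tailed binomial test: double the probability of the observed
    tail (values at least as extreme as x), capped at 1."""
    if x >= n * p0:  # observed count falls in the upper tail
        tail = sum(binom_pmf(k, n, p0) for k in range(x, n + 1))
    else:            # observed count falls in the lower tail
        tail = sum(binom_pmf(k, n, p0) for k in range(0, x + 1))
    return min(1.0, 2 * tail)
```

For example, 8 successes in 10 trials against p₀ = 0.5 gives P = 2 × Pr[X ≥ 8] = 2 × 56/1024 ≈ 0.109.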

Quick reference summary: 2 Goodness-of-Fit test
What is it for? Compares observed frequencies in categories of a single variable to the expected frequencies under a random model What does it assume? Random samples; no expected values < 1; no more than 20% of expected values < 5 Test statistic: 2 Distribution under Ho: 2 with df=# categories - # parameters - 1 Formula:

χ² goodness-of-fit test flow: Ho: the data fit a particular discrete distribution. From the sample, calculate the expected values and the χ² test statistic; compare it to the χ² null distribution with N − 1 − (# parameters) d.f. If P < 0.05, reject Ho; if P > 0.05, fail to reject Ho.

χ² Goodness-of-Fit test
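The χ² statistic itself is one line of arithmetic; a stdlib-only Python sketch (the function name and the example counts are illustrative):

```python
def chi2_gof(observed, expected):
    """Chi-squared goodness-of-fit statistic: sum of (O - E)^2 / E."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Proportional model: expected counts are n times each category's
# expected relative frequency (here, equal frequencies in 4 categories).
observed = [25, 10, 15, 30]                     # illustrative counts
n = sum(observed)
expected = [n / len(observed)] * len(observed)  # equal-frequency null
```

The resulting value is then compared to the χ² distribution with (# categories − # parameters − 1) degrees of freedom.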

Possible distributions

| Distribution | Use when | Examples |
|---|---|---|
| Proportional | Given a number of categories; probability proportional to number of opportunities (expected count = n × frequency of occurrence) | Days of the week, months of the year |
| Binomial | Number of successes in n trials; have to know n and p under the null hypothesis | Punnett square, many p = 0.5 examples |
| Poisson | Number of events in an interval of space or time; n not fixed, p not given | Car wrecks, flowers in a field |

Quick reference summary: 2 Contingency Test
What is it for? Tests the null hypothesis of no association between two categorical variables What does it assume? Random samples; no expected values < 1; no more than 20% of expected values < 5 Test statistic: 2 Distribution under Ho: 2 with df=(r-1)(c-1) where r = # rows, c = # columns Formulae:

χ² contingency test flow: Ho: no association between the variables. From the sample, calculate the expected values and the χ² test statistic; compare it to the χ² null distribution with (r − 1)(c − 1) d.f. If P < 0.05, reject Ho; if P > 0.05, fail to reject Ho.

χ² Contingency test
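A stdlib-only Python sketch of the contingency-test arithmetic (the function name is illustrative): compute expected counts from row and column totals, then sum (O − E)²/E.

```python
def chi2_contingency(table):
    """Chi-squared contingency statistic for a 2-D table (list of rows).

    Expected[i][j] = row_total[i] * col_total[j] / grand_total;
    chi2 = sum of (O - E)^2 / E, with (r - 1)(c - 1) degrees of freedom."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_totals[i] * col_totals[j] / grand
            chi2 += (obs - exp) ** 2 / exp
    df = (len(row_totals) - 1) * (len(col_totals) - 1)
    return chi2, df
```

The returned χ² is then compared to the χ² distribution with the returned degrees of freedom.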

Quick reference summary: One-sample t-test

- What is it for? Compares the mean of a numerical variable to a hypothesized value, μ₀
- What does it assume? Individuals are randomly sampled from a population that is normally distributed
- Test statistic: t
- Distribution under Ho: t-distribution with n − 1 degrees of freedom
- Formula: t = (x̄ − μ₀) / SE_x̄, where SE_x̄ = s / √n

One-sample t-test flow: Ho: the population mean is equal to μ₀. From the sample, compute the t test statistic; compare it to the t null distribution with n − 1 d.f. If P < 0.05, reject Ho; if P > 0.05, fail to reject Ho.

One-sample t-test
Ho: The population mean is equal to μ₀
Ha: The population mean is not equal to μ₀
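Written out as code (a stdlib-only Python sketch; the function name is illustrative, not from the course materials):

```python
from math import sqrt
from statistics import mean, stdev

def one_sample_t(data, mu0):
    """t = (xbar - mu0) / (s / sqrt(n)), compared to t with n - 1 d.f."""
    n = len(data)
    return (mean(data) - mu0) / (stdev(data) / sqrt(n))
```

Note that `statistics.stdev` is the sample standard deviation (n − 1 denominator), which is what the formula requires.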

Paired vs. 2 sample comparisons

Quick reference summary: Paired t-test

- What is it for? To test whether the mean difference in a population equals a null hypothesized value, μd₀
- What does it assume? Pairs are randomly sampled from a population; the differences are normally distributed
- Test statistic: t
- Distribution under Ho: t-distribution with n − 1 degrees of freedom, where n is the number of pairs
- Formula: t = (d̄ − μd₀) / SE_d̄, where d̄ is the mean difference and SE_d̄ = s_d / √n

Paired t-test flow: Ho: the mean difference is equal to μd₀. From the sample of pairs, compute the t test statistic; compare it to the t null distribution with n − 1 d.f., where n is the number of pairs. If P < 0.05, reject Ho; if P > 0.05, fail to reject Ho.

Paired t-test
Ho: The mean difference is equal to 0
Ha: The mean difference is not equal to 0

Quick reference summary: Two-sample t-test

- What is it for? Tests whether two groups have the same mean
- What does it assume? Both samples are random samples; the numerical variable is normally distributed within both populations; the variance of the distribution is the same in the two populations
- Test statistic: t
- Distribution under Ho: t-distribution with n₁ + n₂ − 2 degrees of freedom
- Formulae: sp² = [(n₁ − 1)s₁² + (n₂ − 1)s₂²] / (n₁ + n₂ − 2); t = (x̄₁ − x̄₂) / √[sp²(1/n₁ + 1/n₂)]

Two-sample t-test flow: Ho: the two populations have the same mean (μ₁ = μ₂). From the samples, compute the t test statistic; compare it to the t null distribution with n₁ + n₂ − 2 d.f. If P < 0.05, reject Ho; if P > 0.05, fail to reject Ho.

Two-sample t-test Ho: The means of the two populations are equal
Ha: The means of the two populations are not equal
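The pooled-variance formula can be sketched directly (stdlib-only Python; the function name is illustrative):

```python
from math import sqrt
from statistics import mean, variance

def two_sample_t(x, y):
    """Pooled two-sample t with n1 + n2 - 2 degrees of freedom.

    sp2 = ((n1 - 1) s1^2 + (n2 - 1) s2^2) / (n1 + n2 - 2)
    t   = (xbar - ybar) / sqrt(sp2 * (1/n1 + 1/n2))"""
    n1, n2 = len(x), len(y)
    sp2 = ((n1 - 1) * variance(x) + (n2 - 1) * variance(y)) / (n1 + n2 - 2)
    return (mean(x) - mean(y)) / sqrt(sp2 * (1 / n1 + 1 / n2))
```

Welch's t-test (covered below) drops the pooled variance, using s₁²/n₁ + s₂²/n₂ in the denominator and an adjusted degrees-of-freedom formula instead.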

F-test for comparing the variances of two groups

F-test flow: Ho: the two populations have the same variance (σ₁² = σ₂²). From the samples, compute the test statistic F = s₁²/s₂²; compare it to the F null distribution with n₁ − 1, n₂ − 1 d.f. If P < 0.05, reject Ho; if P > 0.05, fail to reject Ho.

Welch's t-test flow: Ho: the two populations have the same mean (μ₁ = μ₂). From the samples, compute the t test statistic; compare it to the t null distribution with degrees of freedom given by the Welch formula. If P < 0.05, reject Ho; if P > 0.05, fail to reject Ho.

| Parametric | Nonparametric |
|---|---|
| One-sample and paired t-test | Sign test |
| Two-sample t-test | Mann-Whitney U-test |

Quick Reference Summary: Sign Test

- What is it for? A non-parametric test to compare the median of a group to some constant
- What does it assume? Random samples
- Test statistic: the number of subjects with values greater than (and less than) the hypothesized median
- Formula: identical to a binomial test with p₀ = 0.5: P = 2 × Pr[X ≥ x], where Pr[X = x] = C(n, x) p^x (1 − p)^(n − x); p = probability of success in each trial, n = total number of trials

Sign test flow: Ho: median = m₀. The test statistic is x, the number of values greater than m₀; compare it to the binomial(n, 0.5) null distribution. If P < 0.05, reject Ho; if P > 0.05, fail to reject Ho.

Sign Test
Ho: The median is equal to some value m₀
Ha: The median is not equal to m₀
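Since the sign test is a binomial test with p₀ = 0.5, it is short to sketch (stdlib-only Python; the function name is illustrative, and dropping ties with m₀ is a common convention, not stated on the slides):

```python
from math import comb

def sign_test(data, m0):
    """Sign test: a two-tailed binomial test with p0 = 0.5 on the number
    of values above the hypothesized median m0 (ties with m0 dropped)."""
    above = sum(1 for v in data if v > m0)
    below = sum(1 for v in data if v < m0)
    n = above + below
    if above >= n / 2:   # observed count in the upper tail
        tail = sum(comb(n, k) for k in range(above, n + 1)) / 2 ** n
    else:                # observed count in the lower tail
        tail = sum(comb(n, k) for k in range(0, above + 1)) / 2 ** n
    return min(1.0, 2 * tail)
```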

Quick Reference Summary: Mann-Whitney U Test

- What is it for? A non-parametric test to compare the central tendencies of two groups
- What does it assume? Random samples
- Test statistic: U
- Distribution under Ho: U distribution, with sample sizes n₁ and n₂
- Formulae: U₁ = n₁n₂ + n₁(n₁ + 1)/2 − R₁; U₂ = n₁n₂ − U₁, where n₁ = sample size of group 1, n₂ = sample size of group 2, R₁ = sum of ranks of group 1. Use the larger of U₁ or U₂ for a two-tailed test

Mann-Whitney U test flow: Ho: the two groups have the same median. The test statistic is U₁ or U₂ (use the larger); compare it to the U null distribution with n₁, n₂. If P < 0.05, reject Ho; if P > 0.05, fail to reject Ho.
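The ranking step and the U formulae can be sketched as follows (stdlib-only Python; the function name is illustrative, and midranks for ties are the usual convention):

```python
def mann_whitney_u(x, y):
    """Mann-Whitney U from ranks (midranks for ties).

    U1 = n1*n2 + n1(n1 + 1)/2 - R1,  U2 = n1*n2 - U1;
    the larger of U1, U2 is used for a two-tailed test."""
    combined = sorted(list(x) + list(y))

    def rank(v):
        # midrank: average rank of all observations equal to v
        lower = sum(1 for c in combined if c < v)
        equal = sum(1 for c in combined if c == v)
        return lower + (equal + 1) / 2

    n1, n2 = len(x), len(y)
    r1 = sum(rank(v) for v in x)          # sum of ranks of group 1
    u1 = n1 * n2 + n1 * (n1 + 1) / 2 - r1
    u2 = n1 * n2 - u1
    return max(u1, u2)
```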

Quick Reference Guide - Correlation Coefficient

- What is it for? Measuring the strength of a linear association between two numerical variables
- What does it assume? Bivariate normality and random sampling
- Parameter: ρ
- Estimate: r
- Formula: r = Σ(xᵢ − x̄)(yᵢ − ȳ) / √[Σ(xᵢ − x̄)² Σ(yᵢ − ȳ)²]

Quick Reference Guide - t-test for zero linear correlation

- What is it for? To test the null hypothesis that the population parameter, ρ, is zero
- What does it assume? Bivariate normality and random sampling
- Test statistic: t
- Null distribution: t with n − 2 degrees of freedom
- Formulae: SE_r = √[(1 − r²)/(n − 2)]; t = r / SE_r

t-test for correlation flow: Ho: ρ = 0. From the sample, compute the t test statistic; compare it to the t null distribution with n − 2 d.f. If P < 0.05, reject Ho; if P > 0.05, fail to reject Ho.

Quick Reference Guide - Spearman's Rank Correlation

- What is it for? To test for zero correlation between the ranks of two variables
- What does it assume? Linear relationship between ranks and random sampling
- Test statistic: r_s
- Null distribution: see table; if n > 100, use the t-distribution
- Formulae: same as linear correlation, but computed on ranks

Spearman's rank correlation flow: Ho: ρ_s = 0. The test statistic is r_s; compare it to the Spearman's rank null distribution (Table H). If P < 0.05, reject Ho; if P > 0.05, fail to reject Ho.

Assumptions of Regression

- At each value of X, there is a population of Y values whose mean lies on the "true" regression line
- At each value of X, the distribution of Y values is normal
- The variance of Y values is the same at all values of X
- At each value of X, the Y measurements represent a random sample from the population of Y values

(Residual plots illustrating four cases: OK, non-normal, unequal variance, non-linear.)

Quick Reference Summary: Confidence Interval for Regression Slope

- What is it for? Estimating the slope of the linear equation Y = α + βX between an explanatory variable X and a response variable Y
- What does it assume? The relationship between X and Y is linear; each Y at a given X is a random sample from a normal distribution with equal variance
- Parameter: β; Estimate: b
- Degrees of freedom: n − 2
- Formula: b ± t·SE_b, where t is the two-tailed critical value with n − 2 d.f. and SE_b = √[MS_residual / Σ(xᵢ − x̄)²]

Quick Reference Summary: t-test for Regression Slope

- What is it for? To test the null hypothesis that the population parameter β equals a null hypothesized value, usually 0
- What does it assume? Same as the regression slope C.I.
- Test statistic: t
- Null distribution: t with n − 2 d.f.
- Formula: t = (b − β₀) / SE_b

t-test for regression slope flow: Ho: β = 0. From the sample, compute the t test statistic; compare it to the t null distribution with n − 2 d.f. If P < 0.05, reject Ho; if P > 0.05, fail to reject Ho.

Quick Reference Summary: ANOVA (analysis of variance)

- What is it for? Testing the difference among k means simultaneously
- What does it assume? The variable is normally distributed with equal standard deviations (and variances) in all k populations; each sample is a random sample
- Test statistic: F
- Distribution under Ho: F distribution with k − 1 and N − k degrees of freedom

Quick Reference Summary: ANOVA (analysis of variance)

Formulae:

- SS_group = Σ nᵢ(Ȳᵢ − Ȳ)², with MS_group = SS_group / (k − 1)
- SS_error = Σᵢ Σⱼ (Yᵢⱼ − Ȳᵢ)², with MS_error = SS_error / (N − k)
- F = MS_group / MS_error

where Ȳᵢ = mean of group i, Ȳ = overall mean, nᵢ = size of sample i, N = total sample size.

ANOVA flow: Ho: all groups have the same mean (μ₁ = μ₂ = … = μ_k). From the k samples, compute the F test statistic; compare it to the F null distribution with k − 1, N − k d.f. If P < 0.05, reject Ho; if P > 0.05, fail to reject Ho.

ANOVA Ho: All of the groups have the same mean
Ha: At least one of the groups has a mean that differs from the others
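The F ratio can be computed directly from the sums of squares (stdlib-only Python sketch; the function name is illustrative):

```python
from statistics import mean

def anova_f(groups):
    """One-way ANOVA: returns (F, df_group, df_error).

    SS_group = sum of n_i * (ybar_i - ybar)^2, df = k - 1
    SS_error = sum over groups of sum of (y - ybar_i)^2, df = N - k
    F = MS_group / MS_error"""
    all_vals = [v for g in groups for v in g]
    grand = mean(all_vals)                 # overall mean
    k, N = len(groups), len(all_vals)
    ss_group = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_error = sum((v - mean(g)) ** 2 for g in groups for v in g)
    ms_group = ss_group / (k - 1)
    ms_error = ss_error / (N - k)
    return ms_group / ms_error, k - 1, N - k
```

These are exactly the quantities that fill in the ANOVA table below.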

ANOVA Table

| Source of variation | Sum of squares | df | Mean Squares | F ratio | P |
|---|---|---|---|---|---|
| Treatment | SS_group | k − 1 | MS_group | MS_group / MS_error | |
| Error | SS_error | N − k | MS_error | | |
| Total | SS_total | N − 1 | | | |

Picture of ANOVA terms: SS_total partitions into SS_group + SS_error, with the corresponding mean squares MS_total, MS_group, and MS_error.

Two-factor ANOVA Table

| Source of variation | Sum of Squares | df | Mean Square | F ratio | P |
|---|---|---|---|---|---|
| Treatment 1 | SS1 | k₁ − 1 | MS1 | MS1 / MSE | |
| Treatment 2 | SS2 | k₂ − 1 | MS2 | MS2 / MSE | |
| Treatment 1 × Treatment 2 | SS1×2 | (k₁ − 1)(k₂ − 1) | MS1×2 | MS1×2 / MSE | |
| Error | SSerror | XXX | MSE | | |
| Total | SStotal | N − 1 | | | |

Interpretations of 2-way ANOVA Terms (interaction plots)

- Effect of temperature, not pH
- Effect of pH, not temperature
- Effect of pH and temperature, no interaction
- Effect of pH and temperature, with interaction

Quick Reference Summary: 2-Way ANOVA

- What is it for? Testing the difference among means from a 2-way factorial experiment
- What does it assume? The variable is normally distributed with equal standard deviations (and variances) in all populations; each sample is a random sample
- Test statistic: F (for three different hypotheses)
- Distribution under Ho: F distribution

Quick Reference Summary: 2-Way ANOVA
Formulae: Just need to know how to fill in the table

2-way ANOVA flow: there are three null hypotheses: no effect of Treatment 1, no effect of Treatment 2, and no interaction. For each, compute the F test statistic from the samples and compare it to the corresponding F null distribution. If P < 0.05, reject that Ho; if P > 0.05, fail to reject it.

General Linear Models First step: formulate a model statement Example:

General Linear Models

Second step: make an ANOVA table. Example:

| Source of variation | Sum of squares | df | Mean Squares | F ratio | P |
|---|---|---|---|---|---|
| Treatment | | k − 1 | | | |
| Error | | N − k | | | |
| Total | | N − 1 | | | |

Which test do I use?

How many variables am I comparing?

- 1 → methods for a single variable
- 2 → methods for comparing two variables
- 3 → methods for comparing three or more variables

Methods for one variable

Is the variable categorical or numerical?

- Categorical: am I comparing to a single proportion p₀, or to a whole distribution?
  - Single proportion p₀ → binomial test
  - Distribution → χ² goodness-of-fit test
- Numerical → one-sample t-test

Methods for two variables

| | Explanatory variable (X) categorical | Explanatory variable (X) numerical |
|---|---|---|
| Response (Y) categorical | Contingency analysis | Logistic regression |
| Response (Y) numerical | t-test, ANOVA | Correlation, regression |

How many variables am I comparing?

- 1 variable: is it categorical or numerical?
  - Categorical: comparing to a single proportion p₀ (binomial test) or to a distribution (χ² goodness-of-fit test)
  - Numerical: one-sample t-test
- 2 variables: which combination of categorical and numerical?
  - Categorical X, categorical Y: contingency analysis
  - Numerical X, categorical Y: logistic regression
  - Categorical X, numerical Y: t-test, ANOVA
  - Numerical X, numerical Y: correlation, regression