Simple Statistical Designs: One Dependent Variable


If I have one Dependent Variable, which statistical test do I use?

Is your Dependent Variable (DV) continuous?
- YES: Is your Independent Variable (IV) continuous?
  - YES: Correlation or Linear Regression
  - NO: Do you have only 2 treatments?
    - YES: t-test
    - NO: ANOVA
- NO: Is your IV continuous?
  - YES: Logistic Regression
  - NO: Chi Square

Chi Square

Chi Square (χ²)
- Non-parametric: no population parameters are estimated from the sample
- Chi square is a distribution with one parameter: degrees of freedom (df)
- Positively skewed, but the skew decreases as df increases
- Mean of the distribution is df
- Used for Goodness-of-Fit and Independence tests

Chi-Square Goodness-of-Fit Test
- How well do observed proportions or frequencies fit theoretically expected proportions or frequencies?
- Example: Was test performance better than chance?

χ² = Σ (Observed − Expected)² / Expected
df = # groups − 1

            Observed   Expected
Correct        62         50
Incorrect      38         50
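
A minimal sketch of this test in Python with scipy, using the observed and expected counts from the table above:

```python
from scipy import stats

# Observed vs. expected frequencies from the slide's example
observed = [62, 38]   # correct, incorrect
expected = [50, 50]   # chance performance

chi2, p = stats.chisquare(f_obs=observed, f_exp=expected)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")  # chi2 = 5.76
```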

Chi-Square Test for Independence
- Is the distribution of one variable contingent on another variable?
- Uses a contingency table
- df = (#Rows − 1)(#Columns − 1)
- Example: H₀: depression and gender are independent; H₁: depression and gender are not independent

Observed (Expected):
                  Male      Female    Total
Depressed        10 (15)   20 (15)     30
Not Depressed    40 (35)   30 (35)     70
Total              50        50       100

Chi-Square Test for Independence
Same χ² formula, except expected frequencies are derived from the row and column totals: Expected = (row proportion)(column proportion)(Total), e.g. (30/100)(50/100)(100) = 15.

χ² = (10−15)²/15 + (20−15)²/15 + (40−35)²/35 + (30−35)²/35 ≈ 4.76

Critical χ² with 1 df = 3.84 at p = .05, so reject H₀: depression and gender are NOT independent.
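
The same test as a scipy sketch; correction=False turns off the Yates continuity correction so the result matches the hand calculation above:

```python
import numpy as np
from scipy import stats

# 2x2 contingency table from the slide (rows: depressed / not; cols: male / female)
table = np.array([[10, 20],
                  [40, 30]])

chi2, p, dof, expected = stats.chi2_contingency(table, correction=False)
print(chi2, p, dof)   # ~4.76, p < .05, 1 df
print(expected)       # [[15. 15.] [35. 35.]]
```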

Assumptions of Chi Square
- Independence of observations
- Categories are mutually exclusive
- Sampling distribution in each cell is approximately normal
- Violated if expected frequencies are very low (< 5), especially when N < 20
- Fisher's Exact Test can correct for violations of these assumptions in 2x2 designs
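
For small 2x2 tables, a Fisher's exact test sketch with scipy, reusing the depression-by-gender table from the example above:

```python
from scipy import stats

# Exact test; appropriate when expected cell frequencies are too
# small for the chi-square approximation
odds_ratio, p = stats.fisher_exact([[10, 20], [40, 30]])
print(odds_ratio, p)
```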

Correlation and Regression

Recall the Bivariate Distribution
(Scatterplot example: r = -.17, p = .09)

Interpretation of r
- Slope of the best-fitting straight regression line when both variables are standardized
- A measure of the strength of the relationship between 2 variables
- r² measures the proportion of variability in one measure that can be explained by the other
- 1 − r² measures the proportion of unexplained variability

Correlation Coefficients

Coefficient       Variable 1 Type        Variable 2 Type
Pearson r         continuous             continuous
Point Biserial    continuous             dichotomy
Phi Coefficient   dichotomy              dichotomy
Biserial          continuous             artificial dichotomy
Tetrachoric       artificial dichotomy   artificial dichotomy
Spearman's Rho    ranks                  ranks

Simple Regression
- Prediction: What is the best prediction of Y given X?
- Regress Y on X (i.e. regress the outcome on the predictor)

The Fit of a Straight Line
- The straight line is a summary of a bivariate distribution
- Y = a + bX + ε
- DV = intercept + slope(IV) + error
- Least-squares fit: minimize error by minimizing the sum of squared deviations: Σ(Actual Y − Predicted Y)²
- Regression lines ALWAYS pass through the mean of X and the mean of Y
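
A small numpy sketch of a least-squares fit on simulated data, checking the property above that the fitted line passes through (mean of X, mean of Y):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 2.0 + 0.5 * x + rng.normal(scale=0.8, size=100)

# Least-squares slope and intercept (minimize sum of squared residuals)
b, a = np.polyfit(x, y, deg=1)

# The fitted line always passes through (mean of X, mean of Y)
print(np.isclose(a + b * x.mean(), y.mean()))  # True
```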

b
- Slope: the magnitude of change in Y for a 1-unit change in X
- b = r(SDy / SDx)
- Because of this relationship, in standardized units the predicted Zy = r·Zx
- Standardized beta: the slope if X and Y are converted to z-scores; not interpretable as a raw-unit slope
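
A quick numpy check of the b = r(SDy/SDx) relationship on simulated data:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=200)
y = 1.0 + 0.3 * x + rng.normal(size=200)

r = np.corrcoef(x, y)[0, 1]
b = np.polyfit(x, y, deg=1)[0]

# The slope equals r scaled by the ratio of standard deviations
print(np.isclose(b, r * y.std(ddof=1) / x.std(ddof=1)))  # True
```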

Residuals
- The error in the estimate of the regression line
- Mean of the residuals is always 0
- Residual plots are very informative: they tell you how well your line fits the data

Assumptions & Violations
- Homoscedasticity: uniform variance across the whole bivariate distribution
- Bivariate outlier: an outlier in the joint distribution that need not be an outlier on either X or Y alone
- Influential outliers: ones that move the regression line
- Y is independent and normally distributed at all points along the line (residuals are normally distributed)
Common violations:
- Omission of important variables
- Non-linear relationship between X and Y
- Mismatched distributions (e.g. negative skew and positive skew; but you already corrected those with transformations, right?)
- Group membership (e.g. negative r within groups, positive r across groups)

Logistic Regression
- Continuous predictor(s), but the DV is now dichotomous
- Predicts the probability of a dichotomous outcome (e.g. pass/fail, recover/relapse)
- Fit by maximum likelihood estimation, not least squares
- Fewer assumptions than multiple regression
- The "reverse" of ANOVA
- Similar to Discriminant Function Analysis, which predicts nominal-scale DVs with more than 2 categories
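
A minimal sketch with statsmodels, fitting a logistic regression by maximum likelihood on hypothetical pass/fail data (the variable names are made up for illustration):

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical data: hours studied (continuous IV) vs. pass/fail (dichotomous DV)
rng = np.random.default_rng(2)
hours = rng.uniform(0, 10, 200)
p_pass = 1 / (1 + np.exp(-(hours - 5)))   # true logistic relationship
passed = rng.binomial(1, p_pass)

X = sm.add_constant(hours)                # intercept + predictor
model = sm.Logit(passed, X).fit(disp=0)   # maximum likelihood, not least squares
print(model.params)                       # intercept and slope, in log-odds units
print(model.predict(X[:5]))               # predicted probabilities
```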

t-test
- Similar to z, but with sample estimates instead of actual population parameters:

t = (mean₁ − mean₂) / (pooled within-group standard error of the difference)

- One- or two-tailed; use one-tailed if you can justify it through your hypothesis (more power)
- Effect size is Cohen's d

One-Sample t-test
- Compare the mean of one variable to a specific value (e.g. Is IQ in your sample different from the national norm?)

t = (sample mean − test value) / (standard error of the mean)

Independent-Samples t-test
- Are 2 groups significantly different from each other?
- Assumes independence of groups, normality in both populations, and equal variances (although t is robust against violations of normality)
- Pooled variance = mean of the two variances (or weighted by df if the n's are unequal)
- If variances are unequal, use the Welch t-test

Dependent-Samples t-test (aka Paired-Samples t-test)
- Dependent samples:
  - Same subjects, same variables (measured twice)
  - Same subjects, different variables
  - Related subjects, same variables (e.g. mother and child)
- More powerful: the error term (denominator) is smaller because subject-to-subject variability is removed
- But fewer df, so a higher critical t
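
All three t-tests in one scipy sketch on simulated data; equal_var=False gives the Welch test mentioned above:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
g1 = rng.normal(100, 15, size=30)
g2 = rng.normal(108, 15, size=30)

# One-sample: is the group mean different from a fixed value (e.g. a norm of 100)?
print(stats.ttest_1samp(g1, popmean=100))

# Independent samples: pooled-variance t; equal_var=False gives the Welch test
print(stats.ttest_ind(g1, g2))
print(stats.ttest_ind(g1, g2, equal_var=False))

# Dependent (paired) samples: same subjects measured twice
print(stats.ttest_rel(g1, g2))
```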

Univariate (aka One-Way) ANOVA: Analysis of Variance
- 2 or more levels of a factor
- ANOVA tests H₀ that the means of all levels are equal
- A significant F only indicates that the means are not all equal; it does not say which ones differ
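
A one-way ANOVA sketch with scipy, using hypothetical scores for three levels of one factor:

```python
from scipy import stats

# Hypothetical scores for three levels of one factor
control = [10, 12, 9, 11, 10]
drug1   = [20, 18, 22, 19, 21]
drug2   = [5, 7, 4, 6, 5]

# H0: all level means are equal
f, p = stats.f_oneway(control, drug1, drug2)
print(f"F = {f:.1f}, p = {p:.2g}")
```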

F
F statistic = t² (for 2 groups) = Between-Group Variance / Within-Group Variance = signal / noise
- Robust against violations of normality unless n is small
- Robust against violations of homogeneity of variance unless the n's are unequal
- If the n's are unequal, use Welch's F′ or the Brown-Forsythe F*

Effect size
- A large F does NOT mean a large effect
- Eta squared (η²) = Sum-of-Squares Between / Sum-of-Squares Total
- A variance-proportion estimate
- Positively biased: OVERestimates the true effect
- Omega squared (ω²) adjusts for within-factor variability and is a better estimate
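
A numpy sketch computing η² from its sums-of-squares definition, reusing the hypothetical groups from the ANOVA example:

```python
import numpy as np

groups = [np.array([10, 12, 9, 11, 10]),
          np.array([20, 18, 22, 19, 21]),
          np.array([5, 7, 4, 6, 5])]
alldata = np.concatenate(groups)

ss_total = ((alldata - alldata.mean()) ** 2).sum()
ss_between = sum(len(g) * (g.mean() - alldata.mean()) ** 2 for g in groups)

eta_sq = ss_between / ss_total   # positively biased estimate of effect size
print(round(eta_sq, 3))
```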

Family-wise error
- F is a non-directional, omnibus test and provides no information about specific comparisons between means. In fact, a non-significant omnibus F does not mean that there are no significant differences between specific means.
- However, you can't just run a separate test for each comparison: each independent test has its own error rate (α).
- Family-wise error rate = 1 − (1 − α)^c, where c = # of comparisons
- Example: 3 comparisons with α = .05: 1 − (1 − .05)³ = .143
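
The family-wise error formula as a few lines of Python:

```python
alpha = 0.05
for c in (1, 3, 5, 10):
    fw = 1 - (1 - alpha) ** c   # family-wise error rate for c comparisons
    print(c, round(fw, 3))      # 3 comparisons -> 0.143, as above
```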

Contrasts
- A linear combination of contrast coefficients (weights) on the means of each level of the factor

        Control   Drug 1   Drug 2
mean      10        20       5

To contrast the Control group against the Drug 1 group, the contrast would look like this:
Contrast = 1(Control) + (−1)(Drug 1) + 0(Drug 2) = 10 − 20 + 0 = −10
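
The same contrast as a numpy dot product:

```python
import numpy as np

means   = np.array([10, 20, 5])   # Control, Drug 1, Drug 2 (from the slide)
weights = np.array([1, -1, 0])    # Control vs. Drug 1

contrast = weights @ means        # 1*10 + (-1)*20 + 0*5 = -10
print(contrast)
```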

Unplanned (Post-hoc) Contrasts
- Risk of family-wise error
- Correct with:
  - Bonferroni inequality: divide α by the # of comparisons (equivalently, multiply each p-value by the # of comparisons)
  - Tukey's Honest Significant Difference (HSD): the minimum difference between means necessary for significance
  - Scheffé test: critical F′ = (#groups − 1)(F); ultra-conservative
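
A Tukey HSD sketch with scipy (scipy.stats.tukey_hsd requires SciPy 1.8 or later), on the same hypothetical groups:

```python
from scipy import stats

control = [10, 12, 9, 11, 10]
drug1   = [20, 18, 22, 19, 21]
drug2   = [5, 7, 4, 6, 5]

# Tukey's HSD for all pairwise comparisons, holding family-wise error at alpha
res = stats.tukey_hsd(control, drug1, drug2)
print(res)   # pairwise mean differences, confidence intervals, adjusted p-values
```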

Planned Contrasts
- Polynomial: linear, quadratic, cubic, etc. pattern of means across ordered levels of the factor
- Orthogonal: each contrast's weights sum to 0, and for any pair of contrasts the products of corresponding weights also sum to 0 (the contrasts are independent)
- Non-orthogonal: contrasts overlap (the products of corresponding weights do not sum to 0)

Polynomial Contrasts (aka Trend Analysis)
- A special case of orthogonal contrasts, but the IV must be ordered (e.g. time, age, drug dosage)
- Trend shapes: linear, quadratic, cubic, quartic
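
A numpy sketch of standard orthogonal polynomial weights for four ordered levels, verifying that each set sums to zero and that the sets are mutually orthogonal:

```python
import numpy as np

# Standard orthogonal polynomial coefficients for 4 ordered levels
linear    = np.array([-3, -1,  1,  3])
quadratic = np.array([ 1, -1, -1,  1])
cubic     = np.array([-1,  3, -3,  1])

# Each sums to 0 and each pair is orthogonal (dot product = 0)
print(linear.sum(), quadratic.sum(), cubic.sum())
print(linear @ quadratic, linear @ cubic, quadratic @ cubic)

# A steadily rising set of means loads on the linear contrast
means = np.array([5.0, 7.0, 9.0, 11.0])
print(linear @ means)   # large value -> linear trend
```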

Orthogonal Contrasts
- Deviation: compares the mean of each level (except one) to the mean of all of the levels (grand mean). Levels of the factor can be in any order. (Example factor: Control, Drug 1, Drug 2.)

Orthogonal Contrasts
- Simple: compares the mean of each level to the mean of a specified reference level. This type of contrast is useful when there is a control group. You can choose the first or last category as the reference.

Orthogonal Contrasts
- Helmert: compares the mean of each level of the factor (except the last) to the mean of the subsequent levels combined.

Orthogonal Contrasts
- Difference: compares the mean of each level (except the first) to the mean of the previous levels (aka reverse Helmert contrasts).

Orthogonal Contrasts
- Repeated: compares the mean of each level (except the last) to the mean of the subsequent level. (Example weights for a Control / Drug 1 / Drug 2 design are sketched below.)
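
A numpy sketch applying standard three-level weights for some of the contrast families above to the Control / Drug 1 / Drug 2 means; the exact weights (and their scaling) vary by software, so treat these as illustrative:

```python
import numpy as np

means = np.array([10, 20, 5])   # Control, Drug 1, Drug 2

# Illustrative weights for three levels, following the standard definitions
contrasts = {
    "simple (vs. Control)": [[-1, 1, 0], [-1, 0, 1]],
    "Helmert":              [[1, -0.5, -0.5], [0, 1, -1]],
    "repeated":             [[1, -1, 0], [0, 1, -1]],
}

for name, rows in contrasts.items():
    for w in rows:
        print(name, np.array(w) @ means)   # contrast value for each comparison
```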

Non-orthogonal Contrasts
- Not used often
- Dunn's test (Bonferroni t): controls the family-wise error rate by dividing α by the number of comparisons
- Dunnett's test: uses a t-test, but critical t values come from a different table (Dunnett's) that restricts family-wise error