Review of ANOVA & Inferences About The Pearson Correlation Coefficient, by Heibatollah Baghi and Mastee Badii

Presentation transcript:

1 Review of ANOVA & Inferences About The Pearson Correlation Coefficient Heibatollah Baghi, and Mastee Badii

2 Review of ANOVA (1)

Source     SS     DF    MS    Fc    Fα
Between     70
Within      46
Total      116    14

3 Review of ANOVA (2)

Source        SS     DF    MS    Fc    Fα
Explained      70
Unexplained    46
Total         116    14

4 Review of ANOVA (3)

S.V.                SS     DF    MS    Fc    Fα
Systematic Effect    70
Random Effect        46
Total               116    14

5 Practical Significance or Effect Size in ANOVA
Statistical significance does not provide information about the effect size in ANOVA. The index of effect size is η² (eta-squared): η² = SS_B / SS_T. Here η² = 70/116 = .603, so 60.3% of the variability in stress scores is explained by the different treatments.
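The effect-size arithmetic above can be checked in a few lines of Python (a sketch; the SS values are the ones given on the slide):

```python
# Eta-squared: proportion of total variability explained by the treatments
ss_between = 70.0   # between-groups (treatment) sum of squares
ss_total = 116.0    # total sum of squares

eta_squared = ss_between / ss_total
print(f"eta^2 = {eta_squared:.3f}")  # -> eta^2 = 0.603
```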

6 Practical Significance or Effect Size in ANOVA, Continued

Source     SS     DF    MS    Fc    Fα    η²
Between     70                            .60
Within      46
Total      116    14

7 Sample Size in ANOVA
To estimate the minimum sample size needed for ANOVA, you need to conduct a power analysis. Given α = .05, an effect size of .10, and power (1 − β) of .80, 30 subjects per group would be needed (refer to Table 7-7, page 178).

8 Inferences About The Pearson Correlation Coefficient
Refer to the Session 5 GPA and SAT example.

9 [Table: Y (GPA) and X (SAT) scores for twelve students (A through L), with Sum, Mean, and S.D. summary rows; the cell values were not preserved in this transcript.]

10 Calculation of Covariance & Correlation
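The formulas on this slide did not survive the transcript; the quantities it computes are the standard sample covariance and Pearson correlation:

```latex
\mathrm{cov}_{XY} = \frac{\sum_{i=1}^{N}(X_i - \bar{X})(Y_i - \bar{Y})}{N - 1},
\qquad
r = \frac{\mathrm{cov}_{XY}}{s_X \, s_Y}
```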

11 [Figure: the development of a sampling distribution of sample r. From a population of visual acuity and neck-size "scores" with ρ = 0, repeated samples (Sample 1, Sample 2, Sample 3, etc.) yield varying correlations such as r = −0.80, r = +.15, and r = +.02; plotting the relative frequency of these sample r values builds the sampling distribution of r, centered at μ_r = 0.]

12 Steps in Test of Hypothesis (Same as Before)
1. Determine the appropriate test
2. Establish the level of significance: α
3. Determine whether to use a one-tailed or two-tailed test
4. Calculate the test statistic
5. Determine the degrees of freedom
6. Compare the computed test statistic against a tabled/critical value

13 1. Determine the Appropriate Test
Check assumptions:
- Both variables (X and Y) are measured on an interval or ratio level.
- Pearson's r is suitable for detecting linear relationships between two variables; it is not appropriate as an index of curvilinear relationships.
- The variables are bivariate normal (scores for variable X are normally distributed for each value of variable Y, and vice versa).
- Scores must be homoscedastic (for each value of X, the variability of the Y scores must be about the same).
Pearson's r is robust with respect to the last two assumptions, especially when the sample size is large.

14 2. Establish Level of Significance
α is a predetermined value. By convention:
α = .05
α = .01
α = .001

15 3. Determine Whether to Use a One- or Two-Tailed Test
H0: ρXY = 0
Ha: ρXY ≠ 0 (two-tailed test, if no direction is specified)
Ha: ρXY > 0 or ρXY < 0 (one-tailed test, if a direction is specified)

16 4. Calculating Test Statistics
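The formula on this slide was lost in transcription; the standard test statistic for Pearson's r, consistent with the df rule on the next slide, is:

```latex
t = \frac{r\sqrt{N-2}}{\sqrt{1-r^{2}}}, \qquad df = N - 2
```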

17 5. Determine Degrees of Freedom For Pearson’s r df = N – 2

18 6. Compare the Computed Test Statistic Against a Tabled Value
α = .05. Identify the region(s) of rejection. Look up the critical value t_α corresponding to the degrees of freedom.

19 Example of Correlations Between SAT and GPA Scores
Formulate the statistical hypotheses:
H0: ρXY = 0
Ha: ρXY ≠ 0
α = 0.05
Collect a sample of data, n = 12.

20 Data

21 Calculation of Difference of Y and mean of Y

22 Calculation of Difference of X and Mean of X

23 Calculation of Product of Differences

24 Covariance & Correlation

25 Calculate t-statistics
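The t calculation can be sketched in Python, using the r = .50 and n = 12 reported for this example on the next slide (the critical value is the standard two-tailed t-table entry for df = 10):

```python
import math

# Test H0: rho = 0 for the SAT/GPA example: r = .50, n = 12, df = 10
r = 0.50
n = 12
df = n - 2

t_c = r * math.sqrt(df) / math.sqrt(1 - r**2)
print(f"t_c = {t_c:.3f}")  # -> t_c = 1.826

# Two-tailed critical value t(alpha = .05, df = 10) from a t table
t_alpha = 2.228
print("reject H0" if abs(t_c) > t_alpha else "fail to reject H0")  # -> fail to reject H0
```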

26 Check Significance
Identify the region(s) of rejection: t_α(df = 10, two-tailed) = 2.228.
Make the statistical decision and form a conclusion:
t_c < t_α, so we fail to reject H0.
The p-value > α = 0.05, so we fail to reject H0.
Or use Table B-6: r_c = 0.50 < r_α = .576, so we fail to reject H0.

27 Practical Significance in Pearson r
Judge the practical significance, or the magnitude, of r within the context of what you would expect to find, based on reason and prior studies. The magnitude of r is expressed in terms of r², the coefficient of determination. In our example, r² = .50² = .25, the proportion of variance shared by the two variables.

28 Intuitions about Percent of Variance Explained

29 Sample Size in Pearson r
To estimate the minimum sample size needed for r, you need to conduct a power analysis. For example, given α = .05, an effect size (population r, or ρ) of 0.20, and power of .80, 197 subjects would be needed (refer to Table 9-1). Note: ρ = .10 (small), ρ = .30 (medium), ρ = .50 (large).
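The table value can be approximated with the standard Fisher z sample-size formula (a sketch; this approximation gives a slightly smaller n than the power table cited above):

```python
import math
from statistics import NormalDist

# Approximate n for testing H0: rho = 0 via the Fisher z transformation:
# n ~= ((z_{alpha/2} + z_{power}) / C(rho))^2 + 3, where C(rho) = arctanh(rho)
alpha, power, rho = 0.05, 0.80, 0.20

z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # 1.96 for a two-tailed test
z_beta = NormalDist().inv_cdf(power)           # 0.84
c_rho = math.atanh(rho)                        # Fisher z of rho

n = math.ceil(((z_alpha + z_beta) / c_rho) ** 2 + 3)
print(n)  # -> 194, close to the 197 given by the power table
```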

30 Magnitude of Correlations
ρ = .10 (small)
ρ = .30 (medium)
ρ = .50 (large)

31 Factors Influencing the Pearson r
Linearity. To the extent that a bivariate distribution departs from linearity, the correlation will be lower.
Outliers. Discrepant data points affect the magnitude of the correlation.
Restriction of Range. Restricted variation in either Y or X will result in a lower correlation.
Unreliable Measures. Unreliable measures will result in a lower correlation.

32 Take-Home Lesson
How to calculate a correlation and test whether it differs from a specified constant (such as zero).