
1 POSC 202A: Lecture 12/10 Announcements: “Lab” Tomorrow; Final emailed out tomorrow or Friday. I will make it due Wed, 5pm. Aren’t I tender? Lecture: Substantive Significance, Relationship between Variables

2 Substantive Significance

3 Statistical Significance vs. Substantive Importance Statistical significance speaks to how confident or certain we are that the results are not the product of chance. Substantive importance speaks to how large the results are and the extent to which they matter.

4 Statistical Significance vs. Substantive Importance Recall that it is easier to attain statistical significance as the sample we take gets larger. The denominator in significance tests is the standard error, SD/√n, which shrinks as the sample size grows. Dividing by a smaller denominator leads to a larger test statistic (Z score).
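A minimal Python sketch of this point: holding the effect size and SD fixed, the same small effect produces a much larger Z score at a larger sample size. The numbers here are invented for illustration.

```python
import math

def z_score(effect, sd, n):
    """Z statistic for a sample mean: the effect divided by the
    standard error sd / sqrt(n)."""
    return effect / (sd / math.sqrt(n))

# The same small effect (0.1 SD units) at two sample sizes:
small_n = z_score(0.1, 1.0, 100)    # 1.0 -- not significant
large_n = z_score(0.1, 1.0, 10000)  # 10.0 -- highly significant
```

Nothing about the effect changed between the two calls; only n did. That is why a significant result by itself says nothing about substantive importance.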

5 Statistical Significance vs. Substantive Importance But, none of this speaks to whether the size of the effect we observe is large or small, or whether it matters for some social process. All we are saying with a statistically significant result is that the results are not likely the product of chance. Statistical significance is about confidence NOT importance.

6 Statistical Significance vs. Substantive Importance In the context of regression, the question of substantive significance is always: Is the slope big? What’s big or not is always to some degree subjective. But the question is only to some degree subjective. It is also partly objective. The challenge of assessing substantive significance is to make a correct assessment with respect to the objective part and a reasonable assessment with respect to the subjective part.

7 Relationships among Variables We began this course by talking about how to describe data, where we were examining one variable. Measures of central tendency Measures of dispersion Now we can apply these same concepts to relationships among and between variables.

8 Relationships among Variables Association- Variables are associated if larger (or smaller) values of one variable occur more frequently with larger (or smaller) values of other variables.

9 Relationships among Variables How to describe relationships?
1. Tables
2. Graphs
3. Summary statistics

10 Relationships among Variables Tables describe simple relationships. Usually if you don't see a relationship in a simple table, you won't find it using more complex methods. Look to the diagonals.

Null hypothesis (expected):

Income \ Education    Low        High       Total
Low                   25% (25)   25% (25)   50%
High                  25% (25)   25% (25)   50%
Total                 50%        50%

Alternative hypothesis (observed):

Income \ Education    Low        High       Total
Low                   50% (50)   0%         50%
High                  0%         50% (50)   50%
Total                 50%        50%

11 Relationships among Variables Tables Relationships are often summarized using the chi-squared statistic:

χ² = Σ (observed − expected)² / expected

where the observed and expected counts are calculated for each cell and this result is added across all cells. We treat this statistic as we would a Z score, but use the chi-squared distribution to determine significance (page T-20).

12 Relationships among Variables So for the table of income and education we would get the following result:

χ² = (50−25)²/25 + (0−25)²/25 + (0−25)²/25 + (50−25)²/25 = 100

To find the area in the tail, multiply the number of rows − 1 by the number of columns − 1, or (r−1)*(c−1), to find the df — here (2−1)*(2−1) = 1 — and use that line in Table F on page T-20.
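The chi-squared computation for the observed income/education table can be sketched in a few lines of Python (the counts come from the slide's table):

```python
observed = [[50, 0],
            [0, 50]]
expected = [[25, 25],
            [25, 25]]

# Sum (observed - expected)^2 / expected across all four cells.
chi2 = sum((o - e) ** 2 / e
           for o_row, e_row in zip(observed, expected)
           for o, e in zip(o_row, e_row))

df = (2 - 1) * (2 - 1)  # (rows - 1) * (columns - 1) = 1
# chi2 == 100.0 -- far beyond any conventional critical value at df = 1
```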

13 Relationships among Variables Potential Problems: The unit of analysis can conceal important factors. Simpson’s Paradox- The direction of a relationship can change when the level of data analysis goes from individual to group levels of association.

14 Relationships among Variables University Admissions Decisions

           Male    Female
Admit      3500    2000
Deny       4500    4000
Total      8000    6000

Acceptance rates:
Female: 2000/6000 = 33%
Male: 3500/8000 = 44%

Is there bias?

15 Relationships among Variables University Admissions Decisions by College

Sciences
           Male    Female
Admit      3000    1000
Deny       3000    1000
Total      6000    2000

Acceptance rates: Male: 50%, Female: 50%

Humanities
           Male    Female
Admit      500     1000
Deny       1500    3000
Total      2000    4000

Acceptance rates: Male: 25%, Female: 25%

Is there bias? What is going on?
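Simpson's paradox is easy to verify directly. This Python sketch uses the admissions counts from the slides: within each college the acceptance rates are identical for men and women, yet the pooled rates differ.

```python
# (admit, deny) counts by college and sex, from the slides.
sciences   = {"male": (3000, 3000), "female": (1000, 1000)}
humanities = {"male": (500, 1500),  "female": (1000, 3000)}

def rate(admit, deny):
    return admit / (admit + deny)

# Within each college, men and women are admitted at the same rate...
assert rate(*sciences["male"]) == rate(*sciences["female"]) == 0.50
assert rate(*humanities["male"]) == rate(*humanities["female"]) == 0.25

# ...but pooled across colleges, men appear favored (44% vs. 33%).
male_total   = (3000 + 500, 3000 + 1500)   # (3500, 4500)
female_total = (1000 + 1000, 1000 + 3000)  # (2000, 4000)
```

The pooled gap appears because more men applied to the college with the higher acceptance rate, not because of bias within either college.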

16 Relationships among Variables Lurking Variables: A variable not included in the analysis but that affects the result. In this case, men and women preferred different majors, and men disproportionately applied to the college that was easier to get into.

17 Relationships among Variables Lurking Variable: A variable not included in the analysis but that affects the result. [Diagram: X → Y]

18 Relationships among Variables Lurking Variable: A variable not included in the analysis but that affects the result. [Diagram: X and Y, with lurking variable Z affecting both]

19 Relationships among Variables We can examine the strength of relationships graphically. Scatter plots- Show the relationship between two variables when the data are measured on the ordinal, interval, or ratio scales.

20 Relationships among Variables Scatter plot: [Example scatter plots here. Graphs stolen from fivethirtyeight.com]

21 Relationships among Variables They can also be used to compare across events. [Scatter plots comparing the 2008 and 2010 elections]

22 Relationships among Variables We can examine the strength of relationships statistically: measures of association.

23 Relationships among Variables We can examine the strength of relationships statistically. Measures of association: we mentioned the chi-squared statistic. The workhorse is the correlation coefficient.

24 Correlation Coefficient

25 Relationships among Variables Correlation Coefficient: Tells us the linear association between two variables. Ranges from -1.0 (a perfect negative association) to +1.0 (a perfect positive association). Abbreviated as 'r'. Answers the question: how far are the points, on average, from the line?

26 Relationships among Variables

27 As a general rule:
r = .7 is a very strong relationship
r = .5 is a strong relationship
r = .3 is a weak relationship
r = 0 is no relationship
But it varies depending on how noisy the data are.
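For reference, the correlation coefficient can be computed by hand as the average product of the standardized values. A small Python sketch with invented data:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation: the average product of the
    standardized x and y values."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs) / n)
    sy = math.sqrt(sum((y - my) ** 2 for y in ys) / n)
    return sum((x - mx) * (y - my)
               for x, y in zip(xs, ys)) / (n * sx * sy)

# A perfect positive linear relationship gives r = 1.
xs = [1, 2, 3, 4, 5]
r = pearson_r(xs, [2 * x + 1 for x in xs])
```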

28 Relationships among Variables Weaknesses: Does not tell us the magnitude. Example: correlation between education and income =.8. Should you get an MBA? How can we account for intervening variables?

29 Regression Tells us not only the direction but the precise strength of the relationship. How much increasing one variable changes another variable.

30 Regression To clarify this concept we need to be more precise in how we define our variables. Dependent Variable (Y)- The thing being explained. Usually the phenomenon we are interested in studying. Think of this as “the effect”

31 Regression Independent Variable (X)- The thing that affects what we seek to explain. In overly simplistic terms think of this as “the cause” or “an influence”

32 Regression Regression tells us in precise terms the strength of the relationship between the dependent and independent variable. How? By fitting the line through the data that minimizes the sum of squared errors.

33 Regression OLS picks the line with the smallest sum of squared errors.

34 Regression Regression is a process that evaluates the relationship between two variables and selects the line that minimizes the sum of squared errors around the line.

Y = a + bX + e

Where:
Y = dependent variable
a = intercept
b = slope
X = independent variable
e = residual
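The least-squares slope and intercept have closed-form solutions, which this Python sketch computes from scratch on invented data that lie exactly on a line:

```python
def ols(xs, ys):
    """Least-squares slope and intercept for Y = a + bX + e."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Slope: covariance of X and Y over the variance of X.
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    # Intercept: the line passes through the point of means.
    a = my - b * mx
    return a, b

# Data generated from Y = 3 + 2X are recovered exactly.
xs = [0, 1, 2, 3]
a, b = ols(xs, [3 + 2 * x for x in xs])
```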

35 Regression The relationship between the independent and dependent variable is summarized by the regression coefficient which tells us the angle or slope of the line.

36 Regression Regression Coefficient (b): Tells us how much Y changes for a one-unit change in X. This is called the slope, which reflects the angle of the line.

37 Regression Intercept- Tells us what the value of Y is when X is zero. Also called the constant.

38 Regression R squared (R²) tells us how much of the variation in the dependent variable (Y) our model explained. It's how well a line fits or describes the data. Ranges from 0 to 100%.
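R² can be computed as one minus the share of variation the line leaves unexplained. A Python sketch, again on invented data that fall exactly on a line (so R² comes out at its maximum):

```python
def r_squared(xs, ys, a, b):
    """Share of variation in Y explained by the fitted line a + b*X."""
    my = sum(ys) / len(ys)
    ss_total = sum((y - my) ** 2 for y in ys)               # total variation
    ss_resid = sum((y - (a + b * x)) ** 2                    # unexplained
                   for x, y in zip(xs, ys))
    return 1 - ss_resid / ss_total

xs = [0, 1, 2, 3]
ys = [3, 5, 7, 9]        # exactly Y = 3 + 2X
r2 = r_squared(xs, ys, 3, 2)
# r2 == 1.0 -- the line explains all of the variation
```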

39 Regression What is the relationship between the vote for Ross Perot and Bob Dole in Florida in 1996? ALWAYS begin by graphing your data.

40 Regression OLS picks the line with the smallest sum of squared errors.

41 Regression [Stata output annotated with the intercept, slope (b), and R²]

42 Regression: Interpretation Three main results:
1. The slope: for every additional vote Bob Dole received, Perot got .18 more votes; for every 100 votes for Dole, Perot got 18.
2. Where Dole got no votes, we expect Perot to get 1055.
3. The model explains about 84% of the variation in Ross Perot's vote.

43 Regression Slope (b) = .184 Intercept = 1055 R² = .843
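To make the interpretation concrete, this Python sketch plugs the fitted values into the regression equation; the county total of 10,000 Dole votes is an invented example, not a figure from the Florida data.

```python
intercept = 1055   # expected Perot vote where Dole got zero votes
slope = 0.184      # additional Perot votes per additional Dole vote

def predicted_perot(dole_votes):
    """Predicted Perot vote from the fitted line: intercept + slope * X."""
    return intercept + slope * dole_votes

# A hypothetical county giving Dole 10,000 votes is predicted
# to give Perot 1055 + 0.184 * 10000 = 2895 votes.
```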

44 High vs. Low R² [Two scatter plots: R² = .06 and R² = .45] Used to compare how well different models explain the data. A higher R² indicates a better fit.

45 Regression Standard Error To this point, we have assumed that we know the standard deviation of the population. In practice, we seldom know this, so we estimate it (just as we estimate the population mean). This estimate is called the standard error.

46 Regression We estimate a standard error for the slope and the intercept and use it like we did the standard deviation—to perform significance tests.

47 Regression Here we are conducting a significance test of whether the observed slope and intercept differ from the null hypothesis. This statistic follows a T distribution which accounts for the additional uncertainty that comes from estimating the standard error.

48 Regression [Stata output annotated with the intercept, slope (b), R², and standard errors]

49 Regression We can simply plug the figures in from the Stata output. This statistic follows a T distribution which accounts for the additional uncertainty that comes from estimating the standard error. (See inside back cover of M&M). To determine the degrees of freedom subtract the number of variables (2) in the model from the number of observations (67). This is abbreviated as n-k.
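A Python sketch of the significance test described above. The degrees of freedom come from the slide (67 observations, 2 estimated coefficients); the standard error of 0.01 is a made-up placeholder, since the real value lives in the Stata output.

```python
def t_stat(estimate, std_error, null_value=0.0):
    """t statistic: how many standard errors the estimate
    lies from the null-hypothesis value."""
    return (estimate - null_value) / std_error

n, k = 67, 2     # observations and estimated coefficients
df = n - k       # 65 degrees of freedom

# Hypothetical standard error of 0.01 for the slope of 0.184:
t = t_stat(0.184, 0.01)   # 18.4 -- far beyond the rule of thumb of 2
```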

50 Regression We can simply plug the figures in from the Stata output. Or, we can use our rule of thumb—if the T statistic is greater than 2 it is significant!

51 Regression If we look to the table, we see the p value is less than .0001: less than 1 time in 10,000 would we see a result as big as .184 if the true value were zero. Stata also reports these p values for us.

52 Regression: Interpreting the Rest [Stata output annotated with the t statistics and p values; the F statistic tests the null that all coefficients equal 0]

53 Regression: Residuals OK, let's interpret this.

54 Regression: Residuals Next up: residuals, regression assumptions, multiple regression.
