
Sociology 601, Class 17: October 27, 2009. Linear relationships (A & F, chapter 9.1). Least squares estimation (A & F 9.2). The linear regression model (9.3).


1 Sociology 601, Class 17: October 27, 2009. Linear relationships (A & F, chapter 9.1). Least squares estimation (A & F 9.2). The linear regression model (9.3).

2 Example of a Linear Relationship

3 Equation for a linear relationship
A linear relationship is a relationship between two variables Y and X that can be defined by the equation: Y = α + βX
Y is the value for the response variable
X is the value for the explanatory variable
α is the Y-intercept
β is the slope

4 Example of a linear relationship
Change over time in attitudes about gender: Y = α + βX
Y (the response variable) = % who disagree that men make better politicians than women
X (the explanatory variable) = year of survey
α (the Y-intercept) = value of Y when X = 0
β (the slope) = change in Y per unit of X

5 Example of a linear relationship
Change over time in attitudes about gender: Y hat = a + bX = -25.62 + 0.013*year

year   observed   predicted
1974   49.2%      47.6%
1975   48.3%      49.0%
1977   47.2%      51.6%
1978   54.5%      52.9%
1982   59.2%      58.2%
1983   61.1%      59.5%
1985   58.6%      62.2%
1986   60.5%      63.5%
1988   65.3%      66.1%
1989   64.7%      67.5%
1990   69.4%      68.8%
1991   71.1%      70.1%
1993   74.4%      72.7%
1994   74.6%      74.1%
1996   74.9%      76.7%

6 The Dangers of Extrapolation
Change over time in attitudes about gender: Y hat = a + bX = -25.62 + 0.013*year

year   observed   predicted
1974   49.2%      47.6%
1996   74.9%      76.7%
2008   70.0%      92.6%
2020   ?          108.4%

7 Example of a Linear Relationship

8 Key terms for linear relationships
Explanatory variable: a variable that we think of as explaining or "causing" the value of another variable (also called the independent variable). We reserve X to denote the explanatory variable.
Response variable: a variable that we think of as being explained or "caused" by the value of another variable (also called the dependent variable). We reserve Y to denote the response variable.
(Q: what happens if both variables explain each other?)

9 More key terms for linear relationships
β: the slope of a linear relationship
β is the increment in Y per one unit of X.
o If β > 0, the relationship between the explanatory and response variables is positive.
o If β < 0, the relationship between the explanatory and response variables is negative.
o If β = 0, the explanatory and response variables are said to be independent.
If x is multiplied by 12 (e.g., months rather than years), then β' = ?
If x is divided by 10 (e.g., decades rather than years), then β' = ?
If y is multiplied by 100 (e.g., percentage points rather than proportion), then β' = ?
If you subtract 1974 from x, then β' = ?
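The rescaling questions above can be explored empirically. Here is a toy demonstration in Python (the data series and helper function are made up for illustration, not taken from the lecture):

```python
# Illustrative sketch: how rescaling x changes the least squares slope b.
# The data here are made up; only the rescaling pattern matters.
def ols_slope(x, y):
    """Least squares slope: sum((xi - xbar)(yi - ybar)) / sum((xi - xbar)^2)."""
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    sxx = sum((xi - xbar) ** 2 for xi in x)
    return sxy / sxx

years = [0, 1, 2, 3, 4]
y = [47.6, 49.0, 50.4, 51.8, 53.2]   # rises by exactly 1.4 per year

b_years = ols_slope(years, y)                      # slope per year: 1.4
b_months = ols_slope([12 * t for t in years], y)   # x in months: b shrinks by 12
b_decades = ols_slope([t / 10 for t in years], y)  # x in decades: b grows by 10

print(b_years, b_months * 12, b_decades / 10)  # all three agree (about 1.4)
```

Multiplying x by a constant divides the slope by that constant, and vice versa, because the same change in Y is now spread over more (or fewer) units of x.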

10 More key terms for linear relationships
α: the y-intercept of a linear relationship
α is the value of Y when X = 0.
o This is sometimes a meaningless value of X, way beyond its observed range.
α determines the height of the line up or down on the y-axis.
If x is multiplied by 12 (e.g., months rather than years), then α' = ?
If x is divided by 10 (e.g., decades rather than years), then α' = ?
If y is multiplied by 100 (e.g., percentage points rather than proportion), then α' = ?
If you subtract 1974 from x, then α' = ?
(Note: α and β are both population parameters, like μ.)

11 More key terms for linear relationships
Model: a formula that provides a simple approximation for the relationship between variables. The linear function is the simplest model for a relationship between two interval-scale variables.
Regression analysis: using linear models to study...
o the form of a relationship between variables
o the strength of a relationship between variables
o whether a statistically significant relationship exists between variables

12 Another Example of a Linear Relationship

13 9.2 Predicting Y-scores using least squares regression
Next, we study relationships between two variables where there are multiple cases for each value of X, and the Y scores do not always line up on a straight line. There is some scatter to the data points. The objective is still to predict a value of Y, given a value of X.

14 Linear prediction: an example
Chaves, M. and D.E. Cann. 1992. "Regulation, Pluralism, and Religious Market Structure." Rationality and Society 4(3): 272-290.
Observations for 18 countries.
Outcome variable: weekly percent attending religious services (variable name: "attend").
Explanatory variable: level of state regulation of religion (variable name: "regul"), an ordinal ranking from 0 to 6 (not truly interval scale).

15 Plotting a linear relationship in STATA. plot attend regul 82 + | * | a | t | e | n | * * d | | | * * | * * * | | * * | 3 + * * +----------------------------------------------------------------+ 0 regul 6 15

16 Solving a least squares regression, using STATA. regress attend regul Source | SS df MS Number of obs = 18 -------------+------------------------------ F( 1, 16) = 9.65 Model | 2240.05128 1 2240.05128 Prob > F = 0.0068 Residual | 3715.94872 16 232.246795 R-squared = 0.3761 -------------+------------------------------ Adj R-squared = 0.3371 Total | 5956 17 350.352941 Root MSE = 15.24 ------------------------------------------------------------------------------ attend | Coef. Std. Err. t P>|t| [95% Conf. Interval] -------------+---------------------------------------------------------------- regul | -5.358974 1.72555 -3.11 0.007 -9.016977 -1.700972 _cons | 36.83761 5.395698 6.83 0.000 25.39924 48.27598 ------------------------------------------------------------------------------ b is the coefficient for “regul”. a is the coefficient for “_cons”. (ignore all the other output for now) %attend = 36.8 - 5.4 * regul 16

17 Finding the predicted values of religious attendance (y hat) for each observed level of regulation (x)
%attend = 36.8 - 5.4 * regul

regul   calculation      y hat = predicted attendance
0       36.8 - 5.4*0     36.8%
1       36.8 - 5.4*1     31.5%
2       36.8 - 5.4*2     26.1%
3       36.8 - 5.4*3     20.8%
4       36.8 - 5.4*4     15.4%
5       36.8 - 5.4*5     10.0%
6       36.8 - 5.4*6     4.7%
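This table can be checked quickly in code (a Python sketch rather than the course's Stata). Note that the slide's predictions come from the full-precision coefficients in the regress output (a = 36.83761, b = -5.358974), so the rounded equation 36.8 - 5.4*regul is slightly off in the last digit:

```python
# Reproduce the y-hat column using the full-precision Stata coefficients.
a, b = 36.83761, -5.358974

for regul in range(7):
    yhat = a + b * regul
    print(f"regul={regul}: y hat = {yhat:.1f}%")
# regul=0 gives 36.8%, regul=3 gives 20.8%, regul=6 gives 4.7%
```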

18 Finding the predicted values for each observed level of X, using STATA

. predict pattend
(option xb assumed; fitted values)
. tabulate pattend regul

    Fitted |                          regul
    values |       0       1       2       3       5       6 |   Total
-----------+-------------------------------------------------+--------
  4.683761 |       0       0       0       0       0       2 |       2
  10.04274 |       0       0       0       0       2       0 |       2
  20.76068 |       0       0       0       5       0       0 |       5
  26.11966 |       0       0       2       0       0       0 |       2
  31.47863 |       0       1       0       0       0       0 |       1
  36.83761 |       6       0       0       0       0       0 |       6
-----------+-------------------------------------------------+--------
     Total |       6       1       2       5       2       2 |      18

You can use the predict command only after you have used regress to estimate the regression function.

19 Plotting the predicted values for each observed level of X - in STATA. plot pattend regul 36.8376 + | * | F | i | * t | e | * d | | v | * a | l | u | e | s | | * | 4.68376 + * +----------------------------------------------------------------+ 0 regul 6 19

20 Interpreting a regression
When the data are scattered, we need to ask two questions:
o Are the data suitable for a linear model?
o If so, how do we draw a line through them?
Checking suitability (i.e., checking assumptions):
o scattergrams
o crosstabs (including means and s.d.'s by x-levels)
The assumptions of a linear regression are violated if:
o the plot / crosstab suggests a nonlinear relationship
o there are severe outliers (extreme x or y scores)
o there is evidence of heteroskedasticity (the amount of "scatter" of the dots depends on the x-score)

21 Possible prediction methods
Once you have decided that a linear model is appropriate, how do you choose a linear equation for a scattered mess of dots?
A) Calculate a slope from any two points?
B) Calculate the average slope across all the points?
C) Calculate the slope with the least squared error?
All of these solutions may be technically unbiased, but C is generally accepted as the most efficient. (C gives a slope that is, on average, closest to the slope of the population.)

22 Least squares prediction: formal terms
Population equation for a linear model: Y = α + βX + ε
Equation for a given observation: Yi = a + bXi + ei, where Yi and Xi are observed values of Y and X, and ei is the error in observation Yi.
Prediction for a given value of X, based on a sample: Y hat = a + bX, where Y hat is the predicted value of Y.
Note that Yi - Y hat = ei = the residual for observation i.

23 Least squares prediction: equation for b
Goal for a given sample: estimate b and a such that Σ(Yi - Y hat)² is as small as possible.
(To derive the solution: start with Q = Σ(Yi - a - bXi)², take partial derivatives of Q with respect to a and b, and solve for the minimum. This will not be tested in class!)
Solution:
b = Σ(Xi - X bar)(Yi - Y bar) / Σ(Xi - X bar)²
a = Y bar - b * X bar
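The closed-form solution can be sketched in a few lines of Python (toy data, assumed for illustration; the lecture itself uses Stata's regress command):

```python
# Least squares estimates:
#   b = sum((xi - xbar)(yi - ybar)) / sum((xi - xbar)^2)
#   a = ybar - b * xbar
def least_squares(x, y):
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    b = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
         / sum((xi - xbar) ** 2 for xi in x))
    a = ybar - b * xbar
    return a, b

x = [0, 1, 2, 3]
y = [1, 3, 5, 7]              # exactly y = 1 + 2x
a, b = least_squares(x, y)
print(a, b)                   # 1.0 2.0
```

With perfectly linear data the fitted line recovers the true intercept and slope exactly; with scattered data it gives the line with the smallest possible SSE.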

24 Least squares prediction: more terms
Σ(Yi - Y hat)² is also called the sum of squared errors, or SSE. (Also called the residual sum of squares: the squared errors in the response variable left over after you control for variation due to the explanatory variable.)
The method that calculates b and a to produce the smallest possible SSE is the method of least squares.
b and a are least squares estimates.
The prediction line Y hat = a + bX is the least squares line.

25 Least squares prediction: still more terms
For a given observation, the prediction error ei = Yi - Y hat is called the residual.
An atypical X or Y score, or a large residual, can be called an outlier.
o Outliers can bias an estimate of a slope.
o Outliers can increase the possibility of a type I error of inference.
o Outlier Y scores are especially troublesome when they are associated with extreme values of X.
o Outliers sometimes belong in the data, sometimes not.
o Q: DC homicide rates?

26 Calculating the residuals for each observation, using STATA

. predict rattend, residuals
. summarize attend pattend rattend if country=="Ireland"

    Variable |  Obs        Mean   Std. Dev.        Min        Max
-------------+----------------------------------------------------
      attend |    1          82           .         82         82
     pattend |    1    36.83761           .   36.83761   36.83761
     rattend |    1    45.16239           .   45.16239   45.16239

Reminder: you can only use the predict command after you have used regress to estimate the regression function.
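Ireland's residual follows directly from the coefficients in the earlier regress output (a Python check; Ireland's regul score of 0 is an inference from its fitted value equaling the intercept, not stated on the slide):

```python
# residual = observed - fitted, using the coefficients from the regress output.
a, b = 36.83761, -5.358974
attend_ireland = 82          # observed weekly attendance for Ireland
pattend = a + b * 0          # fitted value when regul = 0 (the intercept)
residual = attend_ireland - pattend
print(round(residual, 5))    # 45.16239, matching rattend above
```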

27 Plotting the residuals for each observed level of X, using STATA. plot rattend regul 45.1624 + | | * | R | e | s | i | d | * u | * a | * l | * s | * | * * | * * * | * -19.4786 + * * +----------------------------------------------------------------+ 0 regul do you notice the residual that is an outlier? 27

28 More on Sums of Squares
Sum of Squares refers to the act of taking each "error", squaring it, and adding it to all the other errors in the sample. This operation is analogous to calculating a variance, without dividing by n-1.
Sum of Squares Total (SST) refers to the squared differences between each score Yi and the overall mean Y bar: Σ(Yi - Y bar)²
Sum of Squares Error (SSE), also called Sum of Squares Residual (SSR), refers to the squared differences between each score Yi and the corresponding prediction from the regression line, Y hat: Σ(Yi - Y hat)²
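The ANOVA table in the earlier regress output illustrates how these pieces fit together: the Model and Residual sums of squares add up to the Total, and their ratio gives the reported R-squared (a quick Python check using those numbers):

```python
# Sums of squares copied from the regress output: SST = SS_model + SSE.
ss_model = 2240.05128   # "Model" SS
sse = 3715.94872        # "Residual" SS (the SSE)
sst = 5956.0            # "Total" SS

print(round(ss_model + sse, 5))   # 5956.0
print(round(ss_model / sst, 4))   # 0.3761, the reported R-squared
```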

29 9.3 The linear regression model
The conceptual problem: the linear model Y = α + βX has limited use because it is deterministic and cannot account for variability in Y-values among observations with the same X-value.
The conceptual solution: the linear regression model E(Y) = α + βX is a probabilistic model better suited to the variable data in social science research. A regression function describes how the mean of the response variable changes according to the value of an explanatory variable.
For example, we don't expect all college graduates to earn more than all high school graduates, but we expect the mean earnings of college graduates to be greater than the mean earnings of high school graduates.

30 A standard deviation for the linear regression model
A new problem: how do we describe variation about the means of a regression line?
A solution: the conditional standard deviation σ refers to the variability of Y values about the conditional population mean E(Y) = α + βX, for subjects with the same value of X. Its sample estimate is:
σ hat = sqrt( SSE / (n - 2) ) = sqrt( Σ(Yi - Y hat)² / (n - 2) )
Q: why n-2?

31 The linear regression model: example of conditional standard deviation
Church attendance and state control problem:
SSE (also called SSR) = 3715.9
n = 18, n-2 = 16
σ hat = sqrt(3715.9 / 16) = sqrt(232.2) = 15.24
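This arithmetic can be verified in one line (a Python check, using the SSE and n from the regress output):

```python
import math

# Conditional standard deviation: sigma-hat = sqrt(SSE / (n - 2)).
# Stata reports this value as "Root MSE". Two degrees of freedom are lost
# because two parameters (a and b) are estimated from the sample.
sse = 3715.94872
n = 18

sigma_hat = math.sqrt(sse / (n - 2))
print(round(sigma_hat, 2))   # 15.24
```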

32 Solving a least squares regression, using STATA. regress attend regul Source | SS df MS Number of obs = 18 -------------+------------------------------ F( 1, 16) = 9.65 Model | 2240.05128 1 2240.05128 Prob > F = 0.0068 Residual | 3715.94872 16 232.246795 R-squared = 0.3761 -------------+------------------------------ Adj R-squared = 0.3371 Total | 5956 17 350.352941 Root MSE = 15.24 ------------------------------------------------------------------------------ attend | Coef. Std. Err. t P>|t| [95% Conf. Interval] -------------+---------------------------------------------------------------- regul | -5.358974 1.72555 -3.11 0.007 -9.016977 -1.700972 _cons | 36.83761 5.395698 6.83 0.000 25.39924 48.27598 ------------------------------------------------------------------------------ b is the coefficient for “regul”. a is the coefficient for “_cons”. (ignore all the other output for now) %attend = 36.8 - 5.4 * regul 32

33 Interpreting the conditional standard deviation
Church attendance and state control problem: for every level of state regulation of religion, the standard deviation of church attendance about its predicted mean is 15.24 percentage points. (Draw chart on board.)
By the assumptions of the regression model, this is true at every level of state control. (Is that assumption valid in this case?)

34 Conditional standard deviation and marginal standard deviation
Degrees of freedom are different: n-2 (conditional) versus n-1 (marginal).
The center is different: Y hat (conditional) versus Y bar (marginal).
The conditional s.d. is usually smaller than the marginal s.d.

