Presentation transcript: Introduction to Biostatistics, Harvard Extension School, Spring 2007. © Scott Evans, Ph.D. and Lynne Peeples, M.S.

Slide 1: Regression Continued… Prediction, Model Evaluation, Multivariate Extensions, & ANOVA

Slide 2: Variables of Interest?
- One (continuous) variable → methods from before the midterm…
- Two variables:
  - Both continuous:
    - Interested in predicting one from the other → Simple Linear Regression
    - Interested in the presence of association:
      - Both variables normal → Pearson Correlation
      - Not normal → Spearman (Rank) Correlation
  - One continuous, one categorical → ANOVA*
- More than two variables → Multiple Linear Regression
*Note: If the categorical variable is ordinal, Spearman rank correlation methods are applicable…

Slide 3: Correlation Review (two scatterplot examples, Example 1 and Example 2, with quadrants I–IV marked about the point (x̄, ȳ))

Slide 4: Correlation Review: More Correlation! (two further scatterplot examples with the same quadrant layout)

Slides 5–6: Simple Linear Regression Review. For each observed point (xᵢ, yᵢ), find its distance to the new fitted line (ŷ) and to the "naïve" guess, the sample mean ȳ. (Scatterplot of y vs. x with both lines drawn.)

Slide 7: Linear Regression Continued…
1. Predicted values
2. Model evaluation
3. No longer "simple"… MULTIPLE linear regression
4. Parallels to "Analysis of Variance", aka ANOVA

Slide 8: 1. Predicted Values
- Last week, we conducted hypothesis tests and CIs for the slope of our linear regression model.
- However, we might also be interested in estimating the mean (and/or individual) value of y for particular values of x.

Slide 9: Predicted Values: Newborn Infant Length Example
- Last week, we came up with the least-squares line for the mean length of low-birth-weight babies (fitted equation shown on the slide):
  - y = length (cm)
  - x = gestational age (weeks)
- What is the predicted mean length of infants at 20 weeks? At 30 weeks?

Slide 10: Predicted Values: Newborn Infant Length Example
- We can make a point estimate. Let x = 29 weeks: ŷ ≈ 36.93 cm (the midpoint of the 95% CI computed on slide 13).
- Now, we are interested in a CI around this…

Slide 11: Predicted Values: CIs
- Confidence interval for ŷ: ŷ ± t_{n−2} × se(ŷ)
- In order to calculate this interval, we need the standard error of ŷ:
  se(ŷ) = s_{y|x} × √( 1/n + (x − x̄)² / Σ(xᵢ − x̄)² ),
  where s_{y|x} is the residual standard deviation.
- Note, we get a different standard error of ŷ for each x.
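
A minimal sketch of this computation in Python, assuming hypothetical arrays x and y of gestational ages and lengths rather than the actual course data:

    import numpy as np
    from scipy import stats

    def mean_response_ci(x, y, x0, level=0.95):
        """CI for the mean of y at x = x0 from a simple linear regression."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        n = len(x)
        b1, b0 = np.polyfit(x, y, 1)                 # slope, intercept
        resid = y - (b0 + b1 * x)
        s = np.sqrt(np.sum(resid**2) / (n - 2))      # residual SD, s_{y|x}
        se = s * np.sqrt(1/n + (x0 - x.mean())**2 / np.sum((x - x.mean())**2))
        t = stats.t.ppf((1 + level) / 2, df=n - 2)   # e.g., 97.5th percentile
        y0 = b0 + b1 * x0                            # point estimate, y-hat
        return y0, (y0 - t * se, y0 + t * se)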

Slide 12: Predicted Values: Newborn Infant Length Example
- Notice that as x gets further from x̄, the standard error gets larger (leading to a wider confidence interval).
- se(ŷ) at 29 weeks = 0.265 cm

Slide 13: Predicted Values: Newborn Infant Length Example
- Plugging in x = 29 weeks and se(ŷ) = 0.265: ŷ ± t_{n−2} × 0.265 ≈ 36.93 ± 0.52.
- The 95% CI for the mean length of infants at 29 weeks of gestation is (36.41, 37.45).

Slide 14: Predicted Values: CIs
- We can do the same for an individual infant…
- Prediction interval for y: ŷ ± t_{n−2} × se(y)
- In order to calculate this interval, we need the standard error, which is always larger than the standard error of ŷ:
  se(y) = s_{y|x} × √( 1 + 1/n + (x − x̄)² / Σ(xᵢ − x̄)² )
- Note, we get a different standard error of y for each x.
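
The individual-level interval reuses the machinery above with one extra term under the square root; a sketch under the same hypothetical setup:

    def individual_prediction_interval(x, y, x0, level=0.95):
        """Prediction interval for a single new y at x = x0."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        n = len(x)
        b1, b0 = np.polyfit(x, y, 1)
        s = np.sqrt(np.sum((y - (b0 + b1 * x))**2) / (n - 2))
        # The leading 1 is the only change from se(y-hat): it adds the
        # variability of one observation around the mean regression line.
        se = s * np.sqrt(1 + 1/n + (x0 - x.mean())**2 / np.sum((x - x.mean())**2))
        t = stats.t.ppf((1 + level) / 2, df=n - 2)
        y0 = b0 + b1 * x0
        return y0, (y0 - t * se, y0 + t * se)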

Slide 15: Predicted Values: Newborn Infant Length Example
- Again, as x gets further from x̄, the standard error gets larger (leading to a wider confidence interval).
- se(y) at x = 29 (an individual infant at 29 weeks) = 2.661 cm
- Much more variability at this level.

Slide 16: Predicted Values: Newborn Infant Length Example
- Plugging in x = 29 weeks and se(y) = 2.661.
- Note, the point estimate of y is the same ŷ.
- The 95% CI for the length of an individual infant at 29 weeks of gestation is (31.66, 42.20).
- A wider interval, compared to (36.41, 37.45) for ŷ.

Slide 17: 2. Model Evaluation
- Homoscedasticity (residual plots)
- Coefficient of determination (R²)
- Just how well does our model fit the data?

Slide 18: Review of Assumptions
Assumptions of the linear regression model:
1. The y values are distributed according to a normal distribution with mean μ_{y|x} and a variance σ² that is unknown.
2. The relationship between X and Y is given by the formula μ_{y|x} = α + βx.
3. The y values are independent.
4. For every value x, the standard deviation of the outcomes y is constant and equal to σ_{y|x}. This concept is called homoscedasticity.

Slide 19: Model Evaluation: Homoscedasticity (scatterplot of y vs. x with the fitted line)

Slide 20: Model Evaluation: Homoscedasticity
- Calculate the residual distance eᵢ = yᵢ − ŷᵢ for each (xᵢ, yᵢ)
- In the end, we have n eᵢ's

Slide 21: Model Evaluation: Homoscedasticity
- Now we plot each of the eᵢ's against the fitted values (ŷ's)
- Are the residuals increasing or decreasing as the fitted values get larger? Here they are fairly consistent across ŷ's.
- Look for outliers; if present, we may want to remove them and refit the line.
(Residual plot: fitted values on the horizontal axis, residuals from −4 to 4 on the vertical axis.)
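
A minimal residual-plot sketch, assuming the same hypothetical arrays x and y as in the earlier sketches:

    import numpy as np
    import matplotlib.pyplot as plt

    b1, b0 = np.polyfit(x, y, 1)        # fit the least-squares line
    y_hat = b0 + b1 * x                 # fitted values
    resid = y - y_hat                   # the n residuals e_i
    plt.scatter(y_hat, resid)
    plt.axhline(0, color="gray", linestyle="--")   # residuals should straddle 0
    plt.xlabel("Fitted values (y-hats)")
    plt.ylabel("Residuals")
    plt.show()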

Slide 22: Model Evaluation: Homoscedasticity
- An example of heteroscedasticity: increasing variability as the fitted values increase.
- Suggests a nonlinear relationship… we may need to transform the data.
(Fan-shaped residual plot, with the corresponding y vs. x scatterplot.)

Slide 23: Model Evaluation: Coefficient of Determination
- R² is a measure of the relative predictive power of a model, i.e., the proportion of variability in Y that is explained by the linear regression of Y on X.
- It is the Pearson correlation coefficient squared: r² = R².
- It also ranges between 0 and 1; the closer to one, the better the model (greater ability to predict).
- R² = 1 would imply that your regression model provides perfect predictions (all data points lie on the least-squares regression line).
- R² = 0.7 would mean 70% of the variation in the response variable can be explained by the predictor(s).

Slide 24: Model Evaluation: Coefficient of Determination
- Given that R-squared is the Pearson correlation coefficient squared, we can solve: R² = r².
- If x explains none of the variation in y, then r = 0 and R² = 0.
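
A quick numeric check of this equivalence, under the same hypothetical x and y arrays:

    import numpy as np
    from scipy import stats

    r, _ = stats.pearsonr(x, y)              # Pearson correlation
    b1, b0 = np.polyfit(x, y, 1)
    sse = np.sum((y - (b0 + b1 * x))**2)     # residual (unexplained) variability
    ssy = np.sum((y - y.mean())**2)          # total variability
    # For simple linear regression these agree up to floating-point error:
    print(r**2, 1 - sse / ssy)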

Slide 25: Model Evaluation: Coefficient of Determination
- Adjusted R-squared = R-squared adjusted for the number of variables in the model, i.e., "punished" for additional variables. We want the more parsimonious (simpler) model.
- Note that R-squared does NOT tell you whether:
  - The predictor is the true cause of the changes in the dependent variable (CORRELATION ≠ CAUSATION!!!)
  - The correct regression line was used
  - You may be omitting variables (multiple linear regression…)

Slide 26: 3. Multiple Linear Regression
- Extend the simple model to include more variables
- Increase our power to make predictions!
- The model is no longer a line, but multidimensional
- Outcome = a function of many variables, e.g., sex, age, race, smoking status, exercise, education, treatment, genetic factors, etc.

Slide 27: Multiple Regression
- Naïve model: y = α + β₁x₁ + ε
- Now fit y = α + β₁x₁ + β₂x₂ + ε: we can assess the effect of x₁ on y while controlling for x₂ (a potential "confounder")
- We can continue adding predictors: y = α + β₁x₁ + β₂x₂ + … + βₖxₖ + ε
- We can even add "interaction" terms (i.e., x₁*x₂)

Slide 28: Multiple Linear Regression
- Can incorporate (and control for) many variables
- A single (continuous) dependent variable
- Multiple independent variables (predictors)
- These variables may be of any scale (continuous, nominal, or ordinal)

Slide 29: Multiple Regression
- Indicator ("dummy") variables are created and used for categorical variables, e.g.:

  Race        x₁  x₂  x₃
  Caucasian   0   0   0
  Black       1   0   0
  Hispanic    0   1   0
  Asian       0   0   1

- Need (# categories − 1) "dummy" variables
- Analysis of Variance equivalent – will cover later…
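
One way to build such indicator columns, sketched with pandas (the race column and its levels mirror the slide's example):

    import pandas as pd

    # List the categories explicitly so Caucasian is the reference level
    race = pd.Categorical(
        ["Caucasian", "Black", "Hispanic", "Asian"],
        categories=["Caucasian", "Black", "Hispanic", "Asian"],
    )
    dummies = pd.get_dummies(race, drop_first=True)
    # Columns: Black, Hispanic, Asian -> (# categories - 1) = 3 dummies;
    # a Caucasian row is all zeros, making it the baseline group.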

Slide 30: Multiple Regression
- Conduct an F-test for the overall model as before, but now with k and n − k − 1 degrees of freedom (k = # of predictors in the model)
- Conduct t-tests for the coefficients of the predictors as before, but now with n − k − 1 degrees of freedom
- Note the F-test is no longer equivalent to the t-test when k > 1

Slide 31: Multiple Regression: Confounding
- Multiple regression can estimate the effect of each variable while controlling for (adjusting for) the effects of the other (potentially confounding) variables in the model
- Confounding occurs when the effect of a variable of interest is distorted because you did not control for the effect of another, "confounding", variable

Slide 32: Multiple Regression: Confounding
- For example, compare a model not accounting for confounding, y = α + β₁x₁ + ε, with one adjusting for the effect of x₂: y = α + β₁x₁ + β₂x₂ + ε
- By definition, a confounding variable is associated with both the dependent variable and the independent variable (the predictor of interest, i.e., x₁)
- Does β₁ change in the second model? If yes, that is evidence that x₂ is confounding the association between x₁ and the dependent variable
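
A sketch of that comparison with statsmodels, assuming a hypothetical DataFrame df with columns y, x1, and x2:

    import statsmodels.formula.api as smf

    crude = smf.ols("y ~ x1", data=df).fit()
    adjusted = smf.ols("y ~ x1 + x2", data=df).fit()
    # If the x1 coefficient shifts noticeably once x2 enters the model,
    # that is evidence of confounding by x2.
    print(crude.params["x1"], adjusted.params["x1"])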

Slide 33: Multiple Regression: Confounding
Assume a model of blood pressure, with alcohol consumption and weight as predictors. Weight and alcohol consumption may be associated, making alcohol consumption a confounder for the effect of weight on blood pressure.
(Diagram: Weight → Blood Pressure, with Alcohol Consumption linked to both.)

Slide 34: Multiple Regression: Effect Modification
- Interactions (effect modification) may be investigated
- The effect of one variable depends on the level of another variable

Slide 35: Multiple Regression: Effect Modification
- Model: y = α + β₁x₁ + β₂x₂ + β₃x₁x₂ + ε
- The effect of x₁ depends on x₂:
  - x₂ = 0 → slope β₁ (non-smoker)
  - x₂ = 1 → slope β₁ + β₃ (smoker)
- BP example: If x₁ = weight and x₂ = smoking status, then the effect on your BP of an additional 10 lbs would be different if you were a smoker vs. a non-smoker.

Slide 36: Multiple Regression: Effect Modification
- Smoker (x₂ = 1): y = (α + β₂) + (β₁ + β₃)x₁, i.e., a new slope and intercept!
- Non-smoker (x₂ = 0): y = α + β₁x₁
- DIFFERENCE = β₂ + β₃x₁
- The difference between smokers and non-smokers depends on x₁
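
A sketch of fitting and reading off these two lines, assuming a hypothetical DataFrame df with continuous bp and weight columns and a 0/1 smoker column:

    import statsmodels.formula.api as smf

    fit = smf.ols("bp ~ weight + smoker + weight:smoker", data=df).fit()
    b = fit.params
    nonsmoker_line = (b["Intercept"], b["weight"])      # (intercept, slope)
    smoker_line = (b["Intercept"] + b["smoker"],        # shifted intercept
                   b["weight"] + b["weight:smoker"])    # shifted slope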

Slide 37: Multiple Regression: Effect Modification
DIFFERENCE in slope and intercept…
(Plot of the two fitted lines, smokers vs. non-smokers, on the same y vs. x axes.)

Slide 38: Multiple Regression: Confounding or Effect Modification?
- Confounding without effect modification: the overall association of the predictor of interest and the dependent variable is not the same as it is after stratifying on the third variable (the "confounder"). However, after stratifying, the association is the same within each stratum.
- Effect modification without confounding: the overall association accurately estimates the average effect of the predictor on the dependent variable, but after stratifying on the third variable, that effect differs across strata.
- Both: the overall association is not a correct estimate of the effect, and the effects differ across subgroups of the third variable.

Slide 39: Multiple Regression
How to build a multiple regression model:
1. Examine two-way scatterplots of potential predictors against your dependent variable
2. For those that look associated, evaluate each in a simple linear regression model ("univariate" analysis)
3. Pick out the significant univariate predictors
4. Use stepwise model-building techniques (a sketch of one follows below):
   - Backwards
   - Forwards
   - Stepwise
   - Best subsets
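
As a concrete illustration of the first of these techniques, a minimal backwards-elimination sketch based on p-values (hypothetical continuous predictors; real model building would also weigh substantive knowledge and adjusted R-squared):

    import statsmodels.formula.api as smf

    def backward_eliminate(df, response, predictors, alpha=0.10):
        """Repeatedly drop the least significant predictor until all p < alpha."""
        preds = list(predictors)
        while preds:
            fit = smf.ols(response + " ~ " + " + ".join(preds), data=df).fit()
            pvals = fit.pvalues.drop("Intercept")
            worst = pvals.idxmax()
            if pvals[worst] < alpha:
                break                    # everything remaining is significant
            preds.remove(worst)          # eliminate and refit
        return preds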

Slide 40: Multiple Regression
- Like simple linear regression, these models require an assessment of model adequacy and goodness of fit:
  - Examination of residuals (comparison of observed vs. predicted values)
  - Coefficient of determination (pay attention to adjusted R-squared)

Slide 41: Lead Exposure Example (from Rosner B. Fundamentals of Biostatistics. 5th ed.)
- Study of the effect of lead exposure on neurological and psychological function in children
- Compared the mean finger–wrist tapping score (MAXFWT), a measure of neurological function, between exposed (blood lead ≥ 40 μg/100 mL) and control (< 40 μg/100 mL) children
- Measured in taps per 10 seconds
- We already have a tool to do this in the "naïve" case: the 2-sample t-test!

Slide 42: Lead Exposure Example (from Rosner B. Fundamentals of Biostatistics. 5th ed.)
- Need a dummy variable for exposure: CSCN2 = 1 if the child is exposed, 0 if the child is a control
- With the 2-sample t-test, we compared the means of the exposed and the controls

Slide 43: Lead Exposure Example (from Rosner B. Fundamentals of Biostatistics. 5th ed.)
- Now, we can turn this into a simple linear regression model: MAXFWT = α + β × CSCN2 + e
- Estimates for each group:
  - Exposed (CSCN2 = 1): MAXFWT = α + β × 1 = α + β
  - Controls (CSCN2 = 0): MAXFWT = α + β × 0 = α
- β represents the difference between the groups (a one-unit increase in CSCN2)
- Testing β = 0 is the same as testing whether the mean difference = 0

Slide 44: Lead Exposure Example
As just shown:
- MAXFWT (exposed) = α + β = 55.095 − 6.658 = 48.437
- MAXFWT (controls) = α = 55.095
- Mean difference = −6.658!

Slide 45: Lead Exposure Example
- Equivalent to a two-sample t-test (with equal variances) of H₀: μ_c = μ_e
- t = −3.003 → p = 0.0034
- The slope of −6.658 is equivalent to the mean difference between the exposed and the controls.
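
A sketch of this equivalence, with hypothetical arrays exposed and control of tapping scores standing in for the real data:

    import numpy as np
    import pandas as pd
    from scipy import stats
    import statsmodels.formula.api as smf

    t, p = stats.ttest_ind(exposed, control)   # equal-variance t-test (the default)

    df = pd.DataFrame({
        "maxfwt": np.concatenate([exposed, control]),
        "cscn2": [1] * len(exposed) + [0] * len(control),
    })
    fit = smf.ols("maxfwt ~ cscn2", data=df).fit()
    # fit.params["cscn2"] is the exposed-minus-control mean difference,
    # and its t statistic and p-value match the two-sample t-test above.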

Slide 46: Lead Exposure Example
- The R-squared is not strong… exposure group alone explains little of the variability in MAXFWT.

Slide 47: Lead Exposure Example
- What other variables are related to neurological function?
- It is often strongly related to age and gender
- Look at scatterplots of age vs. MAXFWT and gender vs. MAXFWT, separately. Both show evidence of association…
(Plots: MAXFWT against age in years, and MAXFWT for males vs. females.)

Slide 48: Lead Exposure Example
- Age is in years, and sex is coded 1 = Male, 2 = Female
- Both appear to be associated with MAXFWT; age is statistically significant (p = 0.0001)

Slide 49: Lead Exposure Example
- Our first multiple linear regression model: MAXFWT regressed on age and sex
- Numerator DF = k = 2 (sum of squares regression)
- Denominator DF = n − k − 1 = 92 (sum of squares error)

Slide 50: Lead Exposure Example
- The adjusted multiple linear regression model adds the exposure dummy: MAXFWT regressed on CSCN2, age, and sex
- The coefficients for age and sex haven't changed by much
- The coefficient for CSCN2 is smaller than the crude (naïve) difference: −5.147, down from −6.658 taps/10 seconds

Slide 51: Lead Exposure Example
- R-squared is up to 0.56 (from 0.09 in the simple model)
- Note: Adjusted R-squared compensates for the added complexity of the model
- Since R-squared will ALWAYS increase as more variables are added, we want to keep things as simple as we can… adjusted R-squared takes that into account

Slide 52: Lead Exposure Example
- Interpretation: Holding sex and age constant (e.g., comparing 10-year-old males), the estimated mean difference between the groups is −5.15 taps/10 seconds, with a 95% CI of (−8.23, −2.06).

Slide 53: Other Regression Models
- Logistic regression: used when the dependent variable is binary; very common in public health/medical studies (e.g., disease vs. no disease)
- Poisson regression: used when the dependent variable is a count
- (Cox) proportional hazards (PH) regression: used when the dependent variable is an "event time" subject to censoring

Slide 54: 4. ANOVA
Variables of interest? (The same decision tree as Slide 2, now highlighting the ANOVA branch: one continuous and one categorical variable → ANOVA. *Note: if the categorical variable is ordinal, rank correlation methods are applicable…)

Slide 55: Analysis of Variance
- A hypothesis test for a difference in the means of k groups:
  - H₀: μ₁ = μ₂ = μ₃ = … = μₖ
  - H_A: at least one pair is not equal
- Assesses differences in means using VARIANCES: within-group and between-group variability
- If there is no difference in means, then the two types of variability should be equal
- Assumes within-group variability is constant across groups
- Note: if k = 2, this is the same as the two-sample t-test
- Only need k − 1 "dummy" variables
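
A minimal sketch of the overall F-test, assuming hypothetical arrays g1, g2, g3 of measurements for k = 3 groups:

    from scipy import stats

    F, p = stats.f_oneway(g1, g2, g3)   # H0: all group means are equal
    # A small p-value says only that at least one pair of means differs;
    # with k = 2 groups this reduces to the two-sample t-test (F = t**2).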

Slide 56: Analysis of Variance
- Parallels the regression methods used when we had one continuous and one categorical variable (with k levels)
- In constructing least-squares lines, we evaluated how much variability in our response could be explained by our explanatory (predictor) variables vs. left unexplained (residual error)…

Slide 57: Analysis of Variance
The total error (SSY) was split into two portions: the variability explained by the regression (SSR) and the residual variability (SSE), i.e., SSY = SSR + SSE.

Slide 58: Analysis of Variance
Similarly, we can think of this split as the variability WITHIN and BETWEEN each level of the predictor: the between-group sum of squares plays the role of SSR, and the within-group sum of squares plays the role of SSE.

Slide 59: Analysis of Variance
Box plots for five levels of an explanatory variable:
- The size of the boxes (Q1–Q3) reflects the "within-group" variability
- The placement of the boxes along the y-axis reflects the "between-group" variability

Slide 60: Analysis of Variance
Box plots for the five levels (A–E) of an explanatory variable plus the total (combined):
- The TOTAL box and a ȳ line are added so we can see where the groups lie relative to the overall mean… How much overlap?

Slide 61: Analysis of Variance Table

  Source            SS                   df      MS            F
  Between groups    SSB (formerly SSR)   k − 1   SSB/(k − 1)   MSB/MSW
  Within groups     SSW (formerly SSE)   n − k   SSW/(n − k)
  Total             SSY                  n − 1

- k = # of groups (levels of the categorical variable)
- Remember, using the parallel regression methods we only needed k − 1 variables for k groups, so now we have k − 1 and n − k degrees of freedom…

Slide 62: Analysis of Variance Table
The same test is conducted as we saw with regression, testing the ratio of the between-group and within-group sums of squares: F = [SSB/(k − 1)] / [SSW/(n − k)]. The larger the between-group variability is relative to the within-group variability, the more likely we are to reject the null hypothesis.
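
The same F statistic, built by hand from the table's pieces (same hypothetical groups as above):

    import numpy as np
    from scipy import stats

    groups = [np.asarray(g, float) for g in (g1, g2, g3)]
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = np.concatenate(groups).mean()

    ss_between = sum(len(g) * (g.mean() - grand_mean)**2 for g in groups)  # SSB
    ss_within = sum(((g - g.mean())**2).sum() for g in groups)             # SSW

    F = (ss_between / (k - 1)) / (ss_within / (n - k))
    p = stats.f.sf(F, k - 1, n - k)     # upper tail of the F distribution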

Slide 63: Analysis of Variance
- If we do reject the null hypothesis of all group means being equal (based on the F-test), then we only know that at least one pair differs
- We still need to find where those differences lie
- Post-hoc tests (aka multiple comparisons), e.g., Tukey, Bonferroni: perform two-sample tests while adjusting to maintain the overall α level
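
A sketch of a Tukey post-hoc comparison with statsmodels, under the same hypothetical groups:

    import numpy as np
    from statsmodels.stats.multicomp import pairwise_tukeyhsd

    scores = np.concatenate([g1, g2, g3])
    labels = ["g1"] * len(g1) + ["g2"] * len(g2) + ["g3"] * len(g3)
    # All pairwise mean comparisons, adjusted to keep the overall alpha at 0.05
    print(pairwise_tukeyhsd(scores, labels, alpha=0.05))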

Slide 64: Review
Variables of interest? (The same decision tree as Slide 2: one continuous variable → methods from before the midterm; two continuous variables → simple linear regression for prediction, Pearson or Spearman correlation for association; one continuous and one categorical variable → ANOVA; more than two variables → multiple linear regression.)

