
1 Logistic regression

2 Regression. Regression is a set of techniques for exploiting the presence of statistical ASSOCIATIONS among variables to make predictions of values of one variable (the DV, TARGET or CRITERION) from knowledge of the values of other variables (the IVs or REGRESSORS).

3 Simple and multiple regression. In SIMPLE regression, there is just one IV. In MULTIPLE regression, there are two or more IVs.

4 Simple regression

5 An example. In a study of the effects of media violence, children were measured on their Actual violence and their Exposure to screen violence. Here is a scatterplot of Actual violence against Exposure. Each point represents the scores of an individual child.

6 [Figure: scatterplot of Actual violence against Exposure]

7 The regression line is drawn through the points.

8 [Figure: the scatterplot with the fitted regression line drawn through the points]

9 The best-fitting line. The regression line of Actual violence upon Exposure is the best-fitting line according to what is known as the LEAST SQUARES criterion.

10 General form of the simple regression equation
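The equation itself was an image and is missing from the transcript. In the deck's notation (Y′ for the predicted score, b₀ for the intercept, b₁ for the slope), it is presumably:

\[ Y' = b_0 + b_1 X \]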

11 Estimates. For a given value of the independent variable X, the corresponding point on the regression line, Y′, serves as an ESTIMATE of the true value of the target variable Y. We know the true values of Y in this particular data set; but we are really interested in the question of how well the Actual violence of children IN GENERAL can be predicted from knowledge of their exposure to media violence.

12 Parameters. A question about children IN GENERAL is a question about the characteristics of a POPULATION, not those of one particular data set. Such questions are about PARAMETERS, not STATISTICS.

13 Regression parameters. To be in a position to predict values of Y from X in the future, we assume that, in the population, there is a linear relationship between Actual violence and Exposure. The values of the slope and intercept we have calculated from our own data are ESTIMATES of the corresponding population parameters.

14 John's scores. John scored 9 on Exposure and 8 on Actual violence. John's predicted score from the regression, Y′, is the point on the line above the value 9 on the x-axis. The error in prediction is Y − Y′, a quantity known as the RESIDUAL score e. John's residual score is shown in the figure on the next slide.

15 [Figure: the scatterplot with John's point marked, showing his residual e = Y − Y′]

16 Goodness-of-fit: the LEAST-SQUARES criterion

17 [Slide shows the least-squares criterion: the fitted line minimises the sum of squared residuals, Σ(Y − Y′)²]

18 Ordinary least-squares (OLS) regression. This approach to regression is known as ORDINARY LEAST SQUARES (OLS) regression. There are other kinds of regression (such as LOGISTIC REGRESSION, today's topic) that do not work in this way.

19 Regression and correlation. Regression and correlation are two sides of the same associative coin. The stronger the association, the narrower the elliptical scatterplot, the higher the correlation coefficient, and the smaller the residuals from regression. THE CORRELATION AND THE REGRESSION COEFFICIENT ALWAYS HAVE THE SAME SIGN. For fixed values of the variances of X and Y, the greater the value of r, the steeper the slope of the regression line, i.e., the greater the value of b₁. The slope of the regression line b₁ and r are related according to …

20 [Slide shows the relationship: b₁ = r(s_Y / s_X), where s_X and s_Y are the standard deviations of X and Y]

21 The coefficient of determination (r²). The square of the Pearson correlation is known as the COEFFICIENT OF DETERMINATION. It is so called because r² is the proportion of the variance of Y that is accounted for by regression upon X.

22 Coefficient of determination
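The formula on this slide was an image; reconstructed from the definition just given, it is presumably:

\[ r^2 = \frac{s_{Y'}^2}{s_Y^2} = \frac{\text{variance of } Y \text{ accounted for by regression}}{\text{total variance of } Y} \]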

23 Prediction without regression. Suppose you know nothing of the association between X and Y, but you are told that the mean of the target variable Y has a certain value, M_Y. You are asked to predict values of Y for various values of X. It can be shown (see below) that your best strategy is to guess the value M_Y, irrespective of the value of X. This is termed INTERCEPT-ONLY prediction.
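A one-line justification, not in the original slides: under the least-squares criterion, the constant guess c that minimises the expected squared error is the mean, since

\[ \frac{d}{dc}\, E\!\left[(Y - c)^2\right] = -2\, E[Y - c] = 0 \quad\Rightarrow\quad c = E[Y] = M_Y. \]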

24 A baseline model. In multiple regression and several other related techniques, the first step is to formulate a baseline model, which takes no account of any associations among the variables. The baseline model is the equivalent of guessing the mean every time. This is Step 0 in several SPSS regression and modelling routines. Step 0 provides a comparison for the evaluation of later models that include one or more of the IVs.

25 Two or more IVs: multiple regression. We could try to predict a person's Actual violence not only from Exposure to screen violence, but also from additional variables, such as number of years of education and other characteristics of the parents. We should then have to determine the relative importance of the various IVs and whether we needed to include all of them in the regression model. These are problems in MULTIPLE REGRESSION.

26 Multiple regression
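The general equation was an image; with k IVs it is presumably:

\[ Y' = b_0 + b_1 X_1 + b_2 X_2 + \dots + b_k X_k \]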

27 Partial regression coefficients. In multiple regression, a PARTIAL REGRESSION COEFFICIENT is the estimated average change in the DV resulting from an increase of one unit in one particular IV, with ALL THE OTHER IVs HELD CONSTANT.

28 The multiple correlation coefficient R. The MULTIPLE CORRELATION R is the correlation between the target variable Y and the corresponding predictions Y′ from the regression.

29 R can never be negative. The ABSOLUTE value of the correlation r is unchanged by linearly transforming either or both of the variables involved. So if the correlation between Y and X is +0.6, so is the correlation between Y and 3X + 4. The correlation between Y and –3X + 4 is –0.6. If, on the other hand, the correlation between Y and X is –0.6, the correlation between Y and –3X + 4 is +0.6: IF THE TRANSFORMATION HAS A NEGATIVE SLOPE, THE SIGN OF THE ORIGINAL CORRELATION CHANGES. If a correlation is negative, so also is the slope of the regression line, with the result that the correlation between Y and Y′ is positive.

30 Coefficient of determination in multiple regression. In multiple regression, the COEFFICIENT OF DETERMINATION is the square of the multiple correlation coefficient.

31 The case of one IV. The multiple correlation coefficient is defined even in simple regression, where there is only one IV. Here, remembering that R can never be negative, it takes the absolute value of the Pearson correlation between X and Y, even when that correlation is negative.

32 The coefficient of multiple determination R². In multiple regression, the coefficient of determination, the proportion of variance of the target variable Y that is accounted for by regression, is R², the square of the multiple correlation coefficient.

33 [Slide shows a formula or diagram for R²]

34 What if the DV is a set of categories? Simple and multiple OLS regression assume that the DV and IVs consist of measures on an independent scale with units. The term CONTINUOUS VARIABLE is used for this sort of DV. But suppose we want to predict whether a person will suffer a heart attack, or will contract a certain illness with known risk factors. Here, we are not predicting a VALUE, but membership of a CATEGORY.

35 Category prediction: the OLS approach. You are trying to predict the presence or absence of a blood condition, which is thought to be made more likely by smoking and alcohol consumption. Why not let 0 = Condition Absent and 1 = Condition Present, and calculate the usual OLS multiple regression equation?

36 Problems. There are serious problems with running OLS regression when the DV is a set of categories. None of the proposed solutions is entirely satisfactory. There are better approaches.

37 Regression with a categorical DV. The two most commonly used techniques are: 1. Logistic regression; 2. Discriminant analysis.

38 Discriminant analysis. If all (or most) of the IVs are continuous, you might consider using DISCRIMINANT ANALYSIS (DA). But the DA model makes assumptions about the distributions of the IVs (such as multivariate normality) which data sets often fail to satisfy. Moreover, DA doesn't like qualitative IVs, such as sex or nationality.

39 Logistic regression. Logistic regression makes fewer assumptions than discriminant analysis. Logistic regression, moreover, is happy with qualitative IVs; in fact, it is happy even if ALL the IVs are qualitative.

40 Logistic regression … It is suspected that smoking and drinking are risk factors in the incidence of a pre-morbid blood condition, characterised by the presence of an antibody. Records of the incidence of the condition in 100 patients are available, together with estimates of the amounts they smoke and drink.

41 First, explore your data. Let's find out how many of the patients have the condition.

42-43 [SPSS output: frequency table showing the incidence of the blood condition]

44 Forty-four patients have the condition.

45 Step 0 in logistic regression. We know that 44 of the 100 people have the condition. Armed only with this fact, and with no knowledge of any associations there might be among the variables, we shall maximise our hit rate if we predict ABSENCE of the condition for ANY person selected at random: 56 of the 100 predictions will then be correct. This, in logistic regression, is the equivalent of no-regression prediction in OLS regression: you just guess M_Y, whatever the value of X.

46 Here is the logistic regression output for Step 0.

47 [SPSS output: Step 0 classification table, predicting Absent for every case (hit rate 56%)]

48 The model proper assumes … Either you have the condition or you don't. As smoking and alcohol increase, however, we assume that the probability of developing the condition increases CONTINUOUSLY as a function of the IVs. In logistic regression, we estimate the probability of the condition with the LOGISTIC REGRESSION FUNCTION. If the estimated probability exceeds a cut-off (usually 0.5), the case is classified by the program as a Yes rather than a No.

49 A logistic regression function

50 [Figure: graph of a logistic regression function, an S-shaped curve rising from 0 to 1]

51 The odds. Last week, I discussed the ODDS. In an EXPERIMENT OF CHANCE (tossing a coin, rolling a die), the ODDS in favour of an event are the number of ways in which the event could occur, divided by the number of ways in which it could fail to occur.

52 The odds … Suppose we know that, out of 100 people, 44 have a certain antibody in their blood. We select a person at random from this group. The ODDS in favour of the person having the antibody are 44 to 56, or 44/56.

53 The log odds (logit). The odds measure suffers from ASYMMETRY OF RANGE: unlikely events have odds between 0 and 1; likely events can have huge odds. The LOG ODDS (LOGIT) is the natural logarithm of the odds: logit = ln(odds) = logₑ(odds).

54 When the logit is zero. Suppose the odds were 50 to 50 (50/50 = 1). Since the log of 1 is zero, a logit of zero means that the odds for are equal to the odds against.

55 Range of the logit. The logit has a symmetrical range: a positive sign means the odds are in favour; a negative sign means the odds are against. The logit has no upper or lower limit: it has an unlimited range of values.

56 Example. The odds in favour of a case having the antibody are 44/56 = 11/14. Logit = ln(11/14) = –.24. The event is less likely than not. If the odds in favour were 56/44, the logit would be ln(56/44) = +.24. Notice the symmetry of the scale of magnitude around the neutral point (odds of 1, logit of 0).

57 Probability. A probability is a measure of likelihood ranging from 0 (an impossibility) to 1 (a certainty). The probability p of an event is the number of ways it can happen divided by the total number of equally likely outcomes. The probability of a six when a die is rolled is 1/6.

58 Relationship between p and odds. A probability and the odds are both measures of likelihood. They are related according to the equations below.
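Reconstructed from the definitions of p and the odds given above, the equations are presumably:

\[ \text{odds} = \frac{p}{1 - p} \qquad\qquad p = \frac{\text{odds}}{1 + \text{odds}} \]

As a check against the running example: p = 44/100 gives odds = 0.44/0.56 = 44/56.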

59 Antilogs. We can write any positive real number as an ANTILOG, that is, as the BASE raised to the power of the LOG.

60 The antilog
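The identity on this slide was an image; presumably:

\[ x = e^{\ln x} \]

That is, any positive number equals the base e raised to the power of its natural log.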

61 [Slide applies the antilog to the odds: odds = e^(ln odds) = e^(logit)]

62 Now, by substituting, we have the logistic regression function.

63 Logistic regression function
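The function itself was an image; reconstructed from the odds-to-p conversion above, it is presumably:

\[ p = \frac{\text{odds}}{1 + \text{odds}} = \frac{e^{\text{logit}}}{1 + e^{\text{logit}}} \]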

64 The logit equation
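The logit equation was an image; with the two IVs of this example (Smoking and Alcohol), it is presumably:

\[ \ln\!\left(\frac{p}{1 - p}\right) = b_0 + b_1 X_1 + b_2 X_2 \]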

65 The logistic regression equation
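Substituting the logit equation into the function above gives, presumably, the equation this slide showed:

\[ p = \frac{1}{1 + e^{-(b_0 + b_1 X_1 + b_2 X_2)}} \]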

66 The problem. In the logit equation, we must find values for the intercept and the regression coefficients such that the accuracy of assignment of cases to categories of the dependent variable is maximised.

67 No mathematical solution. In logistic regression, there is no equivalent of the formulae for the intercept and coefficients in OLS regression. A brute-force computing algorithm is used: starting from arbitrary values of the coefficients, the values are progressively adjusted to try to arrive at a set which maximises the likelihood of obtaining the observed frequencies. In a process known as ITERATION, estimates of the parameters are calculated again and again in the hope that they will converge to stable values. IT DOESN'T ALWAYS HAPPEN! We must therefore check that convergence really has been achieved by examining the ITERATION HISTORY in the SPSS output.
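SPSS's exact algorithm is not reproduced in the slides; purely as an illustration of the idea, here is a minimal sketch in Python/NumPy (all names hypothetical) that fits the logit equation by Newton-Raphson iteration, printing an iteration history of −2LL so that convergence can be checked:

    import numpy as np

    def fit_logistic(X, y, max_iter=20, tol=1e-6):
        """Fit a logistic regression by Newton-Raphson iteration.

        X: (n, k) array of IVs (covariates); y: (n,) array of 0/1 outcomes.
        Prints an iteration history of -2LL so convergence can be checked.
        """
        n = X.shape[0]
        X1 = np.column_stack([np.ones(n), X])   # prepend a column for the intercept
        b = np.zeros(X1.shape[1])               # arbitrary starting values

        for it in range(max_iter):
            p = 1.0 / (1.0 + np.exp(-(X1 @ b)))    # logistic regression function
            p = np.clip(p, 1e-12, 1.0 - 1e-12)     # guard against log(0)
            ll = np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))  # log likelihood
            print(f"iteration {it}: -2LL = {-2.0 * ll:.4f}")      # iteration history

            grad = X1.T @ (y - p)                  # gradient of the log likelihood
            w = p * (1.0 - p)                      # case weights
            hess = X1.T @ (X1 * w[:, None])        # information matrix
            step = np.linalg.solve(hess, grad)     # Newton-Raphson step
            b += step

            if np.max(np.abs(step)) < tol:         # have the estimates stabilised?
                print("converged")
                break
        else:
            print("WARNING: estimates failed to converge")  # heed the dire warning below
        return b                                   # [b0, b1, ..., bk]

For the blood-condition data one might call fit_logistic(np.column_stack([smoking, alcohol]), condition), where smoking, alcohol and condition are NumPy arrays (again, hypothetical names). Classification then follows the cut-off rule of slide 48: predict Yes wherever the fitted p exceeds 0.5.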

68 Potential difficulties. The algorithm will not run successfully if the IVs are too highly correlated. This is the familiar MULTICOLLINEARITY PROBLEM we encountered in OLS regression.

69 Centring. As with OLS multiple regression, it is a good idea to CENTRE variables by subtracting the mean from each score. This move makes the algorithm more robust to substantial correlations among the variables.
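As a sketch (variable names and values hypothetical), centring in NumPy is one line per variable:

    import numpy as np

    smoking = np.array([20.0, 0.0, 35.0, 10.0])  # hypothetical Smoking scores
    alcohol = np.array([4.0, 12.0, 0.0, 7.0])    # hypothetical Alcohol scores

    smoking_c = smoking - smoking.mean()  # centred scores: mean is now zero
    alcohol_c = alcohol - alcohol.mean()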

70 Attributing causality. The IVs are likely to be correlated. As with any multiple regression, when the IVs are correlated it can be difficult to attribute the DV (category membership in this case) unequivocally to any one IV. Moreover, should the battery of IVs be changed, the whole picture may change.

71 The meaning of a logistic regression coefficient. The regression coefficient b is the increase in the logit in favour of an individual having the condition produced by an increment of one unit in the IV. Suppose that for Smoking, b = 1.1. An increase of one smoking unit (e.g. 10 cigarettes) increases the logit (the log odds) by 1.1.

72 Regression coefficients … In terms of the ODDS, an increase of one unit in the IV MULTIPLIES the original odds by the ANTILOG of b, that is, by e^b, or exp(b). Exp(1.1) = 3.0. So an increase of one smoking unit results in the odds being MULTIPLIED by 3; that is, the odds in favour of the event become THREE times as great.
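In symbols (a restatement, not from the original slide):

\[ \text{odds}(X + 1) = e^{b} \times \text{odds}(X), \qquad e^{1.1} \approx 3.0 \]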

73 Are the IVs in our data set closely associated?

74 Explore the data first.

75 [SPSS output: correlations among Smoking, Alcohol and the presence of the condition]

76 Observations. There's a substantial correlation between at least one of the IVs and the DV. Good. There's little association between the IVs. Very good. On the other hand, there is little association between Alcohol and the DV, which is bad. A logistic regression is feasible.

77 [SPSS screenshot: setting up the logistic regression]

78 Covariates. In SPSS logistic regression dialogs, IVs that are continuous variables are known as COVARIATES.

79 Always ask for the ITERATION HISTORY, so that you can check whether the algorithm was able to arrive at a stable estimate.

80 Dire warning! Should the iteration history show failure to converge, the results of the analysis can be ridiculous! The effects of failure to converge are not limited to the IV concerned: they can mess up the whole analysis!

81-82 [SPSS screenshots: dialogs for requesting the iteration history]

83 Step 0: the no-regression, intercept-only model

84 [SPSS output: Step 0, the intercept-only model]

85 The iteration history

86 [SPSS output: the iteration history table]

87 Fitting a model. The goodness-of-fit of a model is measured by a LOG-LIKELIHOOD statistic LL. Since LL is negative, its value is multiplied by –2 to obtain a positive statistic (–2LL) that is distributed approximately as chi-square.
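The standard comparison (the formula is not in the transcript): the difference between the −2LL values of two nested models, such as the baseline model and the model with predictors added, is itself distributed approximately as chi-square, with df equal to the number of parameters added:

\[ \chi^2 = \left(-2LL_{\text{baseline}}\right) - \left(-2LL_{\text{model}}\right) \]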

88 The Nagelkerke R² statistic. The Nagelkerke statistic is the counterpart, in logistic regression, of the coefficient of determination R² in OLS multiple regression. It is a measure of the proportion of the total variation in the incidence of the blood condition accounted for by the regression.

89-91 [SPSS output: model summary statistics and classification table for the fitted model]

92 Hit rate using the regression model. A regression model is now applied. Its hit rate (85%) is obviously much better than the no-regression hit rate of 56%.

93 The Wald statistic. The WALD STATISTIC tests a regression coefficient for significance. The null hypothesis is that, in the population, the coefficient is zero. The Wald statistic is B²/SE² (not B/SE, as Andy Field says on page 224) and is distributed as chi-square.
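In symbols, for a coefficient B with standard error SE:

\[ W = \frac{B^2}{SE^2} \;\sim\; \chi^2(1) \]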

94 [The slide points to the Exp(B) value for Smoking in the output.] This is the antilog of the coefficient of Smoking in the logit equation: increasing Smoking by one unit MULTIPLIES the odds in favour of occurrence by about 10.

95-97 [SPSS output: the Variables in the Equation table, with coefficients, Wald statistics and Exp(B) values]

98 Summary. The incidence of the blood condition is indeed predictable from regression: the model raises the hit rate from 56% to 85%. Smoking contributes significantly to the model. Alcohol does not contribute significantly to the model.

99 The next step. This session has been merely an introduction to the technique of logistic regression. The next step is to do some further reading.

100 Getting started. There's an elementary section on logistic regression in Kinnear, P., & Gray, C. (2007). SPSS 14 made simple. Hove: Psychology Press. Chapter 14. This is mainly a practical, get-started guide; but there is an outline of the rationale of the technique as well.

101 An excellent textbook. Howell, D. C. (2007). Statistical methods for psychology (6th ed.). Belmont, CA: Thomson/Wadsworth. There's a helpful introduction to logistic regression in Chapter 15, the multiple regression chapter.

102 Sage paperbacks. Menard, S. (2002). Applied logistic regression analysis (2nd ed.). London: Sage. Jaccard, J. (2001). Interaction effects in logistic regression. London: Sage.

103 Tabachnick, B. G., & Fidell, L. S. (2007). Using multivariate statistics (5th ed.). Boston: Allyn & Bacon. Chapter 10. Field, A. (2005). Discovering statistics using SPSS for Windows: Advanced techniques for the beginner (2nd ed.). London: Sage. Chapter 6.
