MRes 3rd March 2010 Logistic regression


Programme 2pm – 3:15pm. A talk. A break for coffee. 3:45pm – 4:30pm. A short exercise.

Background Logistic regression is a special kind of regression designed for a specific type of situation. To understand logistic regression, however, you must be clear about some of the fundamentals of ORDINARY LEAST SQUARES (OLS) regression. I’ll review those first, before I talk about logistic regression itself.

A study Does watching screened violence promote violent behaviour in children? In a study of the effects of media violence, some children were measured on their Actual violence and on their Exposure to screened violence. Here is a scatterplot of Actual violence against Exposure.

The scatterplot Each point in the plot represents one child. The coordinates of the point are the child’s scores on Exposure to and Actual violence. A strong statistical ASSOCIATION between Exposure to and Actual violence is evident from the elliptical shape of the cloud of points.

A basically linear association

The Pearson correlation In this situation, the strength of an association is measured by the Pearson correlation, the formula for which is:
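
In standard notation, summing over the members of the sample:

r = \frac{\sum (X - \bar{X})(Y - \bar{Y})}{\sqrt{\sum (X - \bar{X})^{2}\,\sum (Y - \bar{Y})^{2}}}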

Regression Regression is a set of statistical techniques enabling the researcher to exploit an association among variables to PREDICT the values of one variable from those of others. From regression, you can also ascertain the extent to which the variance of a target variable can be EXPLAINED or accounted for in terms of the other variables.

Some key terms The variable we are trying to predict or account for is known variously as the DEPENDENT VARIABLE (DV), the CRITERION, or the TARGET VARIABLE. The predictors are known as the INDEPENDENT VARIABLES (IVs) or REGRESSORS. In our current example, the DV is Actual violence and the IV is Exposure to screen violence.

Simple regression and multiple regression In SIMPLE REGRESSION, there is just ONE IV or regressor. In MULTIPLE REGRESSION, there are TWO OR MORE IVs or regressors.

The regression line In simple regression, a line called the REGRESSION LINE is drawn through the points. The regression line is the line that fits the points most closely, according to the LEAST SQUARES CRITERION. The traditional approach to regression is therefore referred to as ORDINARY LEAST SQUARES (OLS) REGRESSION.

Equation of a straight line
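
In the notation used throughout this deck (slope b, intercept c):

Y = bX + c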

In general …

The regression line

The regression equation
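
The regression equation gives the predicted score Y′ for each value of X, with b and c chosen by the least squares criterion described below:

Y' = bX + c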

On the graph …

Interpretation of slope or regression coefficient The slope or REGRESSION COEFFICIENT is the average number of units of change in the DV that result from a change of one unit on the IV. In our example, slope = .74. So an increase of one unit in Exposure produces, on average, an increase of .74 units (rather less than one unit) in Actual violence.

Residuals Joe scored 8 on Exposure and 9 on Actual violence. Joe's predicted score from the regression, Y′, is the point on the line above the value 8 on the x-axis. This predicted score is 8. The error in prediction, e = (Y – Y′), is a quantity known as the RESIDUAL score. Joe's residual score is 9 – 8 = 1, as shown in the following diagram.

The residual (e)

Least squares criterion for goodness-of-fit The values of the slope and intercept of the regression line are such that the sum of the squares of the residuals (SSerror) is a minimum.
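
In symbols, b and c are chosen so that

SS_{error} = \sum (Y - Y')^{2}

is as small as possible.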

A unique solution The values of b and c needed to achieve the least squares criterion are given by the formula below. Clearly, the regression coefficient b is closely related to the Pearson correlation.
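
In standard form, where s_X and s_Y are the standard deviations of X and Y and r is the Pearson correlation:

b = r\,\frac{s_Y}{s_X} = \frac{\sum (X - \bar{X})(Y - \bar{Y})}{\sum (X - \bar{X})^{2}}, \qquad c = \bar{Y} - b\bar{X}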

Other kinds of regression In ordinary least-squares (OLS) regression, a regression line is fitted, such that the sum of the squares of the residuals is a minimum. There are other kinds of regression (such as LOGISTIC REGRESSION, today’s topic) that do not work in this way.

Regression and correlation Regression and correlation are two sides of the same associative coin. The higher the correlation, the narrower the elliptical cloud of points in the scatterplot. For fixed values of the variances of X and Y, the higher the correlation, the greater will be the value of the regression coefficient.

The violence data

A negative correlation So far, I have considered only positive correlations. Here's a negative one. Does the number of complaints made against GPs vary inversely with the average length of their appointments? The following scatterplot supports this hypothesis.

A strong negative correlation

Relation between the regression coefficient and the correlation coefficient The value of the regression coefficient is directly proportional to the value of the correlation coefficient.

The signs of b and r The regression coefficient and the correlation always have the same sign. For the violence data, both are positive. For the data on GPs’ appointments, both are negative.

Complete independence I take two random samples, each of size 10,000 from a normal population with mean 100 and SD 25. (The syntax for doing this is in an appendix.) Since there should be no association between the two samples, the correlation between them should be zero. The scatterplot will be CIRCULAR. The regression line will be HORIZONTAL, that is, with zero slope.

No association

Intercept-only regression The regression line is horizontal and passes through the value 100 on the y-axis. This is the mean value of the distribution of the dependent variable. Here the intercept of the regression line is equal to the mean value of Y and its slope is zero. When X and Y are independent, you can only predict the mean value of Y, whatever the value of X. This is known as INTERCEPT-ONLY REGRESSION.

Model-building When testing the goodness-of-fit of regression models to the data, a useful baseline is provided by the INDEPENDENCE MODEL, which makes intercept-only predictions of the dependent variable by predicting the mean value of the DV whatever the value of the IV. In several computing procedures, this is labelled as STEP 0 in the analysis. A good regression model should be a big improvement upon the independence model.

The coefficient of determination (r2) The square of the Pearson correlation is known as the COEFFICIENT OF DETERMINATION. It is so-called because r2 is the proportion of the variance of Y that is accounted for by regression upon X.
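
Equivalently, in terms of sums of squares:

r^{2} = \frac{SS_{regression}}{SS_{total}}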

Coefficient of determination

Two or more IVs: multiple regression We could try to predict a child’s actual violence not only from level of exposure to screen violence, but also from additional variables, such as level of parental violence and parental education. We should then have to determine the relative importance of the various IVs and whether we needed to include all of them in the regression model. These are problems in MULTIPLE REGRESSION.

Equations for simple and multiple regression In the multiple regression equation, c is the CONSTANT and b1, b2, …, bp are the PARTIAL REGRESSION COEFFICIENTS.
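
In the deck's notation, the two prediction equations are:

\text{simple: } Y' = c + bX \qquad\qquad \text{multiple: } Y' = c + b_{1}X_{1} + b_{2}X_{2} + \cdots + b_{p}X_{p}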

Partial regression coefficients In multiple regression, a PARTIAL REGRESSION COEFFICIENT is the estimated average change in the DV resulting from an increase of one unit in one particular IV with ALL THE OTHER IVs HELD CONSTANT.

The multiple correlation coefficient R The MULTIPLE CORRELATION COEFFICIENT (R) is the correlation between the target variable Y and the corresponding predictions Y′ of Y from the regression.

Notation When it is necessary to specify which variables are involved in a multiple regression, a subscript notation is used. The multiple correlation between Y and X1, X2, …, Xp is
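
In the standard convention, with the DV named before the dot and the IVs numbered after it:

R_{Y \cdot 12 \cdots p}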

Properties of R R can never take a negative value, because the sign of the slope of the regression line is always the same as that of the correlation. Recall that the Pearson correlation can only vary within the range from –1 to +1, inclusive. In contradistinction, R can only take values between zero and +1, inclusive.

The case of one IV The multiple correlation coefficient is defined even in simple regression, where there is only one IV. Here, remembering that R can never be negative, it takes the ABSOLUTE VALUE of the Pearson correlation (r) between X and Y, even when r has a negative value. So in SPSS, R is included in the output for simple regression.

The coefficient of multiple determination R2 In multiple regression, THE COEFFICIENT OF MULTIPLE DETERMINATION R2 is the proportion of variance of the dependent variable Y that is accounted for by regression upon the IVs.

A spatial representation of the coefficient of multiple determination

What if the DV is a set of categories? Simple and multiple OLS regression assume that all the variables are CONTINUOUS, that is, measured on an interval scale with units. But suppose we want to predict whether a person will suffer from a heart attack or contract a certain illness with known risk factors. Here, we are predicting not a VALUE, but CATEGORY MEMBERSHIP.

Regression with a categorical DV The two most commonly used techniques are logistic regression and discriminant analysis.

Discriminant analysis If all (or most) IVs are continuous, you might consider using DISCRIMINANT ANALYSIS (DA). But the DA model makes assumptions about the distributions of the IVs (such as multivariate normality) which research data often fail to satisfy. Moreover, DA doesn’t like qualitative IVs, such as sex or nationality. For these reasons, logistic regression is increasingly preferred to DA when the DV is categorical.

Categorical IVs Unlike DA, logistic regression is happy with qualitative IVs; in fact, logistic regression is happy even if ALL the IVs are qualitative.

A research question It is suspected that smoking and drinking are risk factors in the incidence of a pre-morbid blood condition, characterised by the presence of an antibody. Records of the incidence of the antibody in 100 patients are available, together with estimates of the amounts that they smoke and drink.

The data

How many of the patients have the antibody?

Use Frequencies

Frequencies dialog

Forty-four of the hundred patients have the antibody

The odds In an EXPERIMENT OF CHANCE (tossing a coin, rolling a die) the ODDS in favour of an event is the number of ways in which the event could occur, divided by the number of ways in which it could fail to occur. If a die is rolled, there is one way of getting a six and there are five ways of not getting a six. The odds in favour of a six are therefore 1/5.

Odds in favour of having the antibody We know that out of 100 patients, 44 have the antibody. We select a person at random from this group. There are 44 ways of selecting a person with the antibody; and 56 ways of selecting someone without it. The odds in favour of the person having the antibody are 44/56 = .79.

Probability A probability is a measure of likelihood ranging from 0 (an impossibility) to 1 (a certainty). The classical definition of probability, like that of the odds, also arises in the context of an experiment of chance. The probability p of an event is the number of ways it can happen divided by the TOTAL number of possible outcomes. When a die is rolled, there are six possible outcomes. There is one way of getting a six. The probability of a six when a die is rolled is therefore 1/6.

Relationship between probability and the odds Probability and the odds are both measures of likelihood and have been defined in the same context – an experiment of chance. They are related as follows.
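
\text{Odds} = \frac{p}{1 - p} \qquad\qquad p = \frac{\text{Odds}}{1 + \text{Odds}}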

Logarithms In a logarithmic system, numbers are expressed as powers (logs) of a constant called the BASE of the system. In COMMON LOGS, the base is 10. In NATURAL LOGS, the base is the mathematical constant e, where e is approximately 2.72.

Logs and antilogs Before the IT revolution, calculations involving large numbers were done by converting the numbers to logs, working with the logs (which are much smaller numbers), then reversing the log function with the ANTILOG FUNCTION to get back to the original number scale.

The antilog function

Log notation (base 10)

Log notation (base e)
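
In standard notation for the two systems:

\log_{10}(x) = y \iff 10^{y} = x \qquad\qquad \ln(x) = \log_{e}(x) = y \iff e^{y} = x

The antilog of y is therefore 10^y in common logs and e^y in natural logs.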

An asymmetrical measure The odds measure suffers from ASYMMETRY OF RANGE. Extremely unlikely events have odds confined between 0 and 1; whereas very likely events can have huge odds running into millions. Two very likely events could be separated by millions in terms of odds; two very unlikely events will be separated by minute fractions.

The log odds or logit The LOG ODDS (LOGIT) is the natural logarithm (log to the base e) of the odds: Logit = ln(Odds) = log_e(Odds).

Even Steven Suppose the odds were 50 to 50 (50/50 = 1). The natural log of 1 is zero (e^0 = 1). So for raw odds of 50 to 50, the logit (log odds) is zero.

Range of the logit The logit has a symmetrical range: a positive sign means the odds are in favour; a negative sign means the odds are against. Unlike the odds, which has a lower limit of zero, the logit has neither an upper nor a lower limit.

Example In our current example, the odds in favour of a case having the antibody are 44/56 = 11/14 = .79. Logit = ln(.79) = –.24. The event is less likely than not, hence the negative sign. If the odds in favour were 56/44, the logit would have been ln(56/44) = ln(1.27) = +.24. Notice the symmetry of the scale of magnitude around the neutral point at 0.

Odds as antilogs A number such as the odds can be written as an ANTILOG, that is, the base e to the power of the natural log of the odds (the logit):
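
In symbols, writing Z for the logit:

\text{Odds} = e^{\ln(\text{Odds})} = e^{Z}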

Probability and the logit We can therefore express the probability in terms of the logit, rather than the odds. We shall use the symbol Z for the logit.

The logistic regression function We have arrived at the LOGISTIC REGRESSION FUNCTION, in which Z is the logit or log odds.
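
Substituting Odds = e^Z into p = Odds/(1 + Odds) gives:

p = \frac{e^{Z}}{1 + e^{Z}} = \frac{1}{1 + e^{-Z}}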

Assumptions of logistic regression Either you have the antibody or you don't. As smoking and alcohol increase, however, the probability of having the antibody is assumed to increase CONTINUOUSLY as a function of the IVs. In logistic regression, we estimate the probability of having the antibody with the LOGISTIC REGRESSION FUNCTION. If the estimated probability exceeds a cut-off (usually set at 0.5), the case is classified by the program as a Yes, rather than a No.

A logistic regression function

Logistic regression and logit functions We have seen that the logistic regression function is non-linear. The logit function (Z), however, is assumed to be linear.

The logit equation The logit is assumed to be a linear function Z of the independent variables. Z looks like an OLS linear regression equation, with a constant and partial regression coefficients.
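
In symbols:

Z = c + b_{1}X_{1} + b_{2}X_{2} + \cdots + b_{p}X_{p}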

Typical graph of the logit function Z

The decision rule
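
With the usual cut-off of 0.5, the rule is: classify the case as a Yes if the estimated probability satisfies

\hat{p} \ge 0.5, \text{ equivalently } Z \ge 0

and as a No otherwise.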

The log of the product is the sum of the logs Since ln(ab) = ln(a) + ln(b), taking antilogs of both sides shows that the original product is recovered as the product of the antilogs of the logs: e^(ln(a) + ln(b)) = e^(ln(a)) × e^(ln(b)) = ab.

Interpretation of a logistic regression coefficient The partial regression coefficient is the increase in the LOG ODDS or LOGIT (Z ) arising from an increase of one unit in the independent variable. The antilog of the partial regression coefficient is the factor by which the original odds must be MULTIPLIED to give the new odds when the IV increases by a unit.

Interpretation of b A unit increase in Smoking increases Z to Z + b.

Example In terms of the ODDS, an increase of one unit in the IV MULTIPLIES the original odds by the ANTILOG of b, that is, by e^b, or exp(b). If b = 1.1, exp(1.1) ≈ 3.0. So an increase of one smoking unit results in the odds being MULTIPLIED by 3; that is, the antibody is THREE times as likely to be present in the blood of those who smoke a unit more.

The regression problem In the logit equation, we must find values of the constant and partial regression coefficients such that correct assignment to categories by the logistic regression function is maximised.

No mathematical solution In logistic regression, there is no equivalent of the formulae for the intercept and coefficients in OLS regression. A ‘brute force’ computing algorithm is used whereby, starting at arbitrary values of the coefficients, the values are progressively adjusted to try to arrive at a set which maximises the likelihood of obtaining the observed frequencies.

Iteration and ‘convergence’ In a process known as ITERATION, estimates of the parameters are calculated again and again in the hope that they will ‘converge’ to stable values. IT DOESN’T ALWAYS HAPPEN! We must therefore check that this ‘convergence’ really has been achieved by examining the ITERATION HISTORY in the SPSS output.
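
To make the iterative fitting concrete, here is a minimal sketch in Python with numpy. The data are simulated, and the names smoke, alcohol and antibody are hypothetical stand-ins for the example's variables; SPSS's own algorithm differs in detail, but the logic (start from arbitrary values, adjust repeatedly, stop when the estimates stabilise) is the same.

import numpy as np

rng = np.random.default_rng(0)
n = 100

# Simulated, centred predictors and a simulated binary outcome (hypothetical data)
smoke = rng.normal(0.0, 1.0, n)
alcohol = rng.normal(0.0, 1.0, n)
z_true = -0.3 + 1.1 * smoke                        # alcohol has no true effect here
antibody = rng.binomial(1, 1.0 / (1.0 + np.exp(-z_true)))

X = np.column_stack([np.ones(n), smoke, alcohol])  # constant plus two IVs
beta = np.zeros(3)                                 # arbitrary starting values

for step in range(1, 21):
    z = X @ beta
    p = 1.0 / (1.0 + np.exp(-z))                   # fitted probabilities
    grad = X.T @ (antibody - p)                    # gradient of the log-likelihood
    hess = X.T @ (X * (p * (1.0 - p))[:, None])    # information matrix
    delta = np.linalg.solve(hess, grad)            # Newton-Raphson adjustment
    beta += delta
    z = X @ beta
    minus2ll = -2.0 * np.sum(antibody * z - np.log1p(np.exp(z)))
    print(f"iteration {step}: -2 log-likelihood = {minus2ll:.4f}")
    if np.abs(delta).max() < 1e-8:                 # estimates have stabilised
        print("converged; constant and coefficients:", beta)
        break

Each pass prints –2 log-likelihood, mimicking an iteration history; if the adjustments never shrink below the criterion, the loop ends without the 'converged' message, which is the failure the following slides warn about.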

Potential difficulties The algorithm will not run successfully if the IVs are too highly correlated. This is the familiar MULTICOLLINEARITY PROBLEM sometimes encountered in OLS regression.

Centring As with OLS multiple regression, it is a good idea to CENTRE variables, by subtracting the mean from each score, so that the mean of the transformed scores is zero. Centring leaves the correlations among the variables unchanged. But centring makes the algorithm more robust to substantial correlations among the variables.
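
For illustration, a minimal sketch of centring in Python (the scores are hypothetical):

import numpy as np

x = np.array([12.0, 5.0, 20.0, 8.0, 15.0])  # hypothetical raw smoking scores
x_centred = x - x.mean()                    # subtract the mean from each score
print(x_centred.mean())                     # 0.0: the centred scores have mean zero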

Finding binary logistic regression

Covariates In SPSS logistic regression dialogs, IVs that are continuous variables are known as COVARIATES.

Always ask for the ITERATION HISTORY, so that you can check whether the algorithm was able to arrive at a stable estimate.

Dire warning! Should the iteration history show failure to converge, the results of the analysis can be ridiculous! The effects of failure to converge are not limited to the IV concerned: they can mess up the whole analysis!

The logistic regression dialog

The Options dialog

Fitting a model The goodness-of-fit of a model is measured by a log-likelihood chi-square statistic. The SMALLER the value of chi-square, the BETTER the fit; the LARGER the p-value, the better.

Step 0 in logistic regression We know that 44/100 people have the condition. Armed only with this fact, and with no knowledge of any associations there might be among the variables, we shall maximise our hit rate if we predict ABSENCE of the condition for ANY person selected at random. This is the equivalent, in logistic regression, of intercept-only (no-regression) prediction in OLS regression: you just guess the mean of Y (M_Y), whatever the value of X.

Here is the logistic regression output for Step 0

Classification table at Step 0

The iteration history

The Nagelkerke R2 statistic The Nagelkerke statistic is the counterpart of the coefficient of determination R2 in OLS multiple regression. It is a measure of the proportion of the total variation in incidence of the antibody accounted for by regression.

The Nagelkerke R2 statistic

Cohen’s guidelines

Hosmer and Lemeshow contingency table

Goodness-of-fit test

Classification table at Step 1 (after the regression model has been applied)

The Wald statistic The WALD STATISTIC tests a regression coefficient for significance. The null hypothesis is that, in the population, the coefficient is zero. The Wald statistic is distributed approximately as chi-square on one degree of freedom.
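
In symbols, for a coefficient b with estimated standard error SE(b):

W = \left(\frac{b}{SE(b)}\right)^{2}

which is referred to the chi-square distribution on one degree of freedom.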

Some regression statistics The Wald statistic confirms that Smoking has an effect (p-value is very small) but Alcohol does not (the p-value is large).

The regression coefficient

The logit equation

The logistic regression function

Graph of accuracy of prediction

Conclusion The incidence of the blood condition is indeed predictable from regression, which raises the hit rate from 56% to 85%. Smoking contributes significantly to the model. Alcohol does not contribute significantly to the model.

The next step This session has been merely an introduction to the technique of logistic regression. The next step is to do some further reading.

Getting started There’s an elementary section on logistic regression in Kinnear, P., & Gray, C. (2010). PASW Statistics 17 made simple. Hove: Psychology Press. Chapter 14. This is mainly a practical, get-started guide; but there is an outline of the rationale of the technique as well.

Next stop Dugard, P., Todman, J., & Staines, H. (2010). Approaching multivariate analysis: A practical introduction (2nd ed.). London & New York: Routledge.

Sage paperbacks Menard, S. (2002). Applied logistic regression analysis (2nd ed.). London: Sage. Jaccard, J. (2001). Interaction effects in logistic regression. London: Sage.

Tabachnick, B. G., & Fidell, L. S. (2007). Using multivariate statistics (5th ed.). Boston: Allyn & Bacon. Chapter 10. Field, A. (2005). Discovering statistics using SPSS for Windows: Advanced techniques for the beginner (2nd ed.). London: Sage. Chapter 6.

Appendix Using syntax to draw random samples from specified populations

Drawing two samples from a normal distribution with mean 100 and SD 15
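
The original appendix gave SPSS syntax; as a stand-in, here is an equivalent sketch in Python with numpy, using the mean and SD in this slide's title:

import numpy as np

rng = np.random.default_rng(1)

# Two independent random samples of 10,000 each from a normal population
x1 = rng.normal(100, 15, 10_000)
x2 = rng.normal(100, 15, 10_000)

# Because the samples are independent, the correlation should be close to zero
print(np.corrcoef(x1, x2)[0, 1])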