Business Statistics - QBM117 Statistical inference for regression.
Objectives
• To define the linear model, which describes the population of interest.
• To explain the required conditions of the error variable.
• To introduce regression diagnostics.
In the previous two lectures, we concentrated on summarising sample bivariate data:
• we learnt how to estimate the strength of the relationship between the variables using the correlation coefficient;
• we learnt how to estimate the relationship between the variables using the least squares regression line; and
• we learnt how to estimate the accuracy of the line for prediction, using the standard error of estimate and the coefficient of determination.
We now need to perform statistical inference about the population from which these samples have been taken, in order to better understand that population.
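The three sample summaries above can be sketched in a few lines of NumPy; the data values here are invented purely for illustration:

```python
import numpy as np

# Hypothetical sample bivariate data (illustrative values only).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1, 11.9])

# Correlation coefficient r: strength of the linear relationship.
r = np.corrcoef(x, y)[0, 1]

# Least squares regression line: y_hat = b0 + b1*x.
b1, b0 = np.polyfit(x, y, 1)
y_hat = b0 + b1 * x

# Coefficient of determination R^2 (equals r^2 in simple regression).
ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1 - ss_res / ss_tot

# Standard error of estimate s_e, with n - 2 degrees of freedom.
n = len(x)
s_e = np.sqrt(ss_res / (n - 2))
```

The identity R² = r² holds only for simple (one-predictor) regression, which is the setting of this unit.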
The linear model
What is the appropriate population for a simple linear regression problem? It is described by the linear model
y = β0 + β1x + ε
where
y = the observed value in the population,
β0 + β1x = the straight-line population relationship,
ε = the error variable.
The least squares regression line ŷ = b0 + b1x therefore estimates the population relationship described by the linear model: b0 estimates β0, and b1 estimates β1. The linear model is the basic assumption required for statistical inference in regression and correlation.
Required conditions of the error variable
The least squares estimates b0 and b1 will only provide good estimates of β0 and β1 if certain assumptions about the error variable ε are valid. Similarly, the statistical tests we perform in hypothesis testing will only be valid if these conditions are satisfied. So what are these conditions?
Required conditions of the error variable
• The probability distribution of ε is normal.
• The mean of the distribution is zero, i.e. E(ε) = 0.
• The variance of ε, σ², is constant, no matter what the value of x.
• The errors associated with any two y values are independent. As a result, the value of the error variable at one point does not affect the value of the error variable at another point.
Requirements 1, 2 and 3 can be interpreted in another way: for each value of x, y is a normally distributed random variable whose mean is β0 + β1x and whose standard deviation is σ. Since the mean depends on x, the expected value is often expressed as E(y|x) = β0 + β1x. The standard deviation, however, is not influenced by x, because it is constant for all values of x.
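A small simulation illustrates this interpretation (the parameter values β0 = 2, β1 = 0.5, σ = 1 are assumed here purely for illustration): for a fixed x, the y values are normal with mean β0 + β1x and standard deviation σ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed population parameters (illustrative, not from the lecture).
beta0, beta1, sigma = 2.0, 0.5, 1.0

# For a fixed x, y = beta0 + beta1*x + eps is normally distributed with
# mean beta0 + beta1*x and standard deviation sigma (the same for every x).
x = 4.0
eps = rng.normal(0.0, sigma, size=100_000)
y = beta0 + beta1 * x + eps

mean_y = y.mean()   # close to E(y|x=4) = 2 + 0.5*4 = 4
sd_y = y.std()      # close to sigma = 1
```

Repeating this for a different x shifts the mean but leaves the spread unchanged, which is exactly the homoscedasticity requirement.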
Assumptions of the Simple Linear Regression Model
[Figure: identical normal distributions of errors at each X, all centred on the regression line E[Y] = β0 + β1X.]
Regression diagnostics
• Most departures from the required conditions can be diagnosed by examining the residuals.
• Excel allows us to calculate these residuals and apply various graphical techniques to them.
• Analysis of the residuals allows us to determine whether the variance of the error variable is constant and whether the errors are independent.
• Excel can also generate standardised residuals. The residuals are standardised in the usual way, by subtracting the mean (0 in this case) and dividing by the standard deviation (or, in this case, its estimate sε).
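Outside Excel, the residuals and standardised residuals can be computed directly. A minimal NumPy sketch, with made-up illustrative data:

```python
import numpy as np

# Hypothetical sample data (illustrative values only).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1, 11.9])

# Fit the least squares line and compute the residuals.
b1, b0 = np.polyfit(x, y, 1)
residuals = y - (b0 + b1 * x)

# Least squares guarantees the residuals sum to (essentially) zero,
# so standardising reduces to dividing by the standard error of estimate.
n = len(x)
s_e = np.sqrt(np.sum(residuals ** 2) / (n - 2))
standardised = residuals / s_e
```

Standardised residuals beyond about ±2 would flag unusually large deviations from the fitted line.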
Non-normality
We can check for normality by drawing a histogram of the residuals to see whether the error variable appears to be normally distributed. Since the tests in regression analysis are robust, it is safe to assume that the normality requirement has been met as long as the histogram at least resembles a bell shape and is not extremely non-normal.
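As a rough numeric companion to the histogram, the sample skewness and kurtosis of the residuals can be compared against the normal-distribution values of 0 and 3. This sketch uses simulated residuals, since no data set accompanies the slide:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated residuals that happen to be normal (for illustration).
res = rng.normal(0.0, 1.0, size=500)

# Crude normality check: standardise, then compare sample skewness
# (3rd moment, ~0 for normal) and kurtosis (4th moment, ~3 for normal).
z = (res - res.mean()) / res.std()
skew = np.mean(z ** 3)
kurt = np.mean(z ** 4)
```

Values of skewness far from 0 or kurtosis far from 3 would suggest the histogram is worth a closer look; given the robustness noted above, only marked departures are a concern.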
Expectation of zero
This requirement need not be checked: the use of the method of least squares to find the line of best fit ensures that the residuals always sum to zero. We can, however, observe from the histogram of the residuals that they are approximately symmetric about a value close to zero.
Heteroscedasticity
The variance of the error variable, σ², is required to be constant. When this requirement is violated, the condition is called heteroscedasticity; homoscedasticity refers to the condition when the requirement is satisfied. One method of diagnosing heteroscedasticity is to plot the residuals against the x values or the predicted values of y, and look for any change in the spread of the variation of the residuals.
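A simulated example (with parameters invented for illustration) shows what such a diagnosis looks like numerically: when the error spread grows with x, the residual spread in the upper half of the x range clearly exceeds that in the lower half, which on a residual plot appears as a widening funnel.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated heteroscedastic data: error standard deviation grows with x.
x = np.linspace(1.0, 10.0, 200)
y = 3.0 + 2.0 * x + rng.normal(0.0, 0.5 * x)

b1, b0 = np.polyfit(x, y, 1)
residuals = y - (b0 + b1 * x)

# A residuals-vs-x plot would show a funnel shape; numerically, compare
# the residual spread in the lower and upper halves of the x range.
low = residuals[x <= 5].std()
high = residuals[x > 5].std()
```

For homoscedastic data the two spreads would be roughly equal; here `high` is clearly larger than `low`.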
Residual Analysis and Checking for Model Inadequacies
Non-independence of the error term
• This requirement states that the values of the error variable must be independent.
• If the data are time series data, the errors are often correlated.
• Error terms which are correlated over time are said to be autocorrelated or serially correlated.
• We can often detect autocorrelation if we plot the residuals against the time period.
• If a pattern emerges, it is likely that the independence requirement is violated.
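A common numeric companion to the residuals-versus-time plot is the Durbin-Watson statistic (not introduced on this slide, so treat this as a supplementary sketch): it is near 2 for independent errors and falls well below 2 when successive errors are positively autocorrelated, as in this simulated AR(1) example.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated time-series errors with strong positive autocorrelation:
# each error is 0.8 times the previous one plus fresh noise (AR(1)).
n = 300
eps = np.zeros(n)
for t in range(1, n):
    eps[t] = 0.8 * eps[t - 1] + rng.normal(0.0, 1.0)

t_index = np.arange(n, dtype=float)
y = 1.0 + 0.5 * t_index + eps

b1, b0 = np.polyfit(t_index, y, 1)
res = y - (b0 + b1 * t_index)

# Durbin-Watson statistic: ~2 for independent errors,
# well below 2 for positive autocorrelation.
dw = np.sum(np.diff(res) ** 2) / np.sum(res ** 2)
```

A residual plot against the time period for these data would show long runs of same-sign residuals, the visual pattern the slide describes.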
Reading for next lecture Read Chapter 18 Sections 18.5 and 18.7 (Chapter 11 Sections 11.5 and 11.7 abridged)