
2
Introduction: The General Linear Model
- "The General Linear Model" is a phrase used for a class of statistical models that includes simple linear regression analysis.
- Regression is the predominant statistical tool in the social sciences because of its simplicity and versatility.
- Also called Linear Regression Analysis.

3
Simple Linear Regression: The Basic Mathematical Model
- Regression is based on the concept of the simple proportional relationship, also known as the straight line.
- We can express this idea mathematically!
- Theoretical aside: all theoretical statements of relationship imply a mathematical structure. Just because the math isn't explicitly stated doesn't mean it isn't implicit in the language itself!

4
Simple Linear Relationships
- Alternate mathematical notations for the straight line (don't ask why!):
- 10th-grade geometry: y = mx + b
- Statistics literature: Y = a + bX + e
- Econometrics literature: Y = B0 + B1X + e

5
Alternate Mathematical Notation for the Line
- These notations are all equivalent. We simply have to live with the inconsistency.
- We won't use the geometric tradition, so you just need to remember that B0 and a are the same thing.

6
Linear Regression: The Linguistic Interpretation
- In general terms, the linear model states that the dependent variable is directly proportional to the value of the independent variable.
- Thus if we state that some variable Y increases in direct proportion to an increase in X, we are stating a specific mathematical model of behavior: the linear model.

7
Linear Regression: A Graphic Interpretation

8
The linear model is represented by a simple picture

9
The Mathematical Interpretation: The Meaning of the Regression Parameters
- a = the intercept: the point where the line crosses the Y-axis (the value of the dependent variable when all of the independent variables equal 0).
- b = the slope: the increase in the dependent variable per unit change in the independent variable (also known as the "rise over the run").
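A minimal sketch of these two parameters, using made-up data that lie exactly on the line y = 2 + 3x, so the fitted intercept (a) and slope (b) recover those values:

```python
# Illustrative data only: points constructed to fall exactly on y = 2 + 3x.
x = [0, 1, 2, 3, 4]
y = [2, 5, 8, 11, 14]

n = len(x)
mean_x = sum(x) / n
mean_y = sum(y) / n

# Slope b: the least-squares version of "rise over run".
b = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y)) \
    / sum((xi - mean_x) ** 2 for xi in x)
# Intercept a: the value of y when x = 0.
a = mean_y - b * mean_x

print(a, b)  # 2.0 3.0
```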

10
The Error Term
- Such models do not predict behavior perfectly.
- So we must add a component to adjust or compensate for the errors in prediction.
- Having fully described the linear model, the rest of the semester (as well as several more) will be spent on the error.

11
The Nature of Least Squares Estimation
- There is one essential goal and there are four important concerns with any OLS model.

12
The "Goal" of Ordinary Least Squares
- Ordinary Least Squares (OLS) is a method of finding the linear model that minimizes the sum of the squared errors.
- Such a model provides the best explanation/prediction of the data.

13
Why Least Squared Error?
- Why not simply minimum error?
- The errors about the line sum to 0.0!
- Minimum absolute deviation (error) models now exist, but they are mathematically cumbersome.
- Try doing algebra with |absolute value| signs!
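The "errors sum to zero" point can be demonstrated directly (illustrative data): fit the least-squares line and check that the residuals cancel out, which is why raw (unsquared) error cannot serve as a criterion.

```python
# Illustrative data only.
x = [1, 2, 3, 4, 5]
y = [3, 4, 4, 6, 8]

n = len(x)
mx, my = sum(x) / n, sum(y) / n
b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
    / sum((xi - mx) ** 2 for xi in x)
a = my - b * mx

# Residuals about the least-squares line sum to zero (up to rounding).
residuals = [yi - (a + b * xi) for xi, yi in zip(x, y)]
assert abs(sum(residuals)) < 1e-9
```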

14
Other Models Are Possible...
- Best parabola? (i.e., nonlinear or curvilinear relationships)
- Best maximum likelihood model?
- Best expert system?
- Complex systems? (chaos models, catastrophe models, others)

15
The Simple Linear Virtue
- I think we overemphasize the linear model.
- It does, however, embody the rather important notion that Y is proportional to X.
- We can state such relationships in simple English: as unemployment increases, so does the crime rate.

16
The Notion of Linear Change
- The linear aspect means that the same increase in unemployment will have the same effect on crime at both low and high unemployment levels.
- A nonlinear relationship would mean that as unemployment increased, its impact on the crime rate might itself change at higher unemployment levels.

17
Why Squared Error?
- Because: (1) the sum of the errors expressed as deviations would be zero, just as deviations about the mean sum to zero, and (2) some feel that big errors should be more influential than small errors.
- Therefore, we wish to find the values of a and b that produce the smallest sum of squared errors.

18
Minimizing the Sum of Squared Errors
- Who put the "Least" in OLS?
- In mathematical jargon we seek to minimize the Unexplained Sum of Squares (USS), where: USS = Σ(Yi − Ŷi)² = Σei²

19
The Parameter Estimates
- In order to do this, we must find parameter estimates that accomplish this minimization.
- In calculus, if you wish to know when a function is at its minimum, you take the first derivative.
- In this case we must take partial derivatives, since we have two parameters (a and b) to worry about.
- We will look at this more closely, and it's not a pretty sight!
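A sketch of what the calculus buys us, with illustrative data: the closed-form estimates for a and b come from setting both partial derivatives of USS to zero, and we can verify numerically that the gradient vanishes at the solution.

```python
# Illustrative data only.
x = [1, 2, 3, 4]
y = [2.0, 2.9, 4.1, 5.0]

n = len(x)
mx, my = sum(x) / n, sum(y) / n
# Closed-form estimates obtained from the normal equations.
b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
    / sum((xi - mx) ** 2 for xi in x)
a = my - b * mx

# Partial derivatives of USS = sum (y - a - b*x)^2 with respect to a and b:
dUSS_da = sum(-2 * (yi - a - b * xi) for xi, yi in zip(x, y))
dUSS_db = sum(-2 * xi * (yi - a - b * xi) for xi, yi in zip(x, y))
# Both vanish at the least-squares solution.
assert abs(dUSS_da) < 1e-9 and abs(dUSS_db) < 1e-9
```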

21
Decomposition of the error in LS

24
Tests of Inference
- t-tests for coefficients
- F-test for the entire model

25
T-Tests
- Since we wish to make probability statements about our model, we must do tests of inference.
- Fortunately, each estimated coefficient divided by its standard error follows a t distribution, so the familiar t-test applies.
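A sketch of the t statistic for the slope in simple regression, with illustrative data. The formula assumed here is the standard one: t = b / SE(b), where SE(b) = sqrt(s² / Σ(x − x̄)²) and s² = USS / (n − 2).

```python
import math

# Illustrative data only.
x = [1, 2, 3, 4, 5, 6]
y = [2.2, 2.8, 4.5, 4.9, 6.1, 7.4]

n = len(x)
mx, my = sum(x) / n, sum(y) / n
sxx = sum((xi - mx) ** 2 for xi in x)
b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
a = my - b * mx

# Unexplained sum of squares and the residual variance estimate s^2.
uss = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
s2 = uss / (n - 2)
# Standard error of the slope, then the t ratio.
se_b = math.sqrt(s2 / sxx)
t = b / se_b
```

A large |t| (relative to the t distribution with n − 2 degrees of freedom) indicates a slope reliably different from zero.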

26
Goodness of Fit
- Since we are interested in how well the model performs at reducing error, we need a means of assessing that error reduction.
- Since the mean of the dependent variable represents a good benchmark for comparing predictions, we calculate the improvement in the prediction of Yi relative to the mean of Y (the best guess of Y with no other information).

27
Sums of Squares
- This gives us the following "sum-of-squares" measures:
- Total Variation = Explained Variation + Unexplained Variation
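The decomposition can be checked numerically (illustrative data): total variation about the mean splits exactly into the part the regression explains and the part it leaves unexplained.

```python
# Illustrative data only.
x = [1, 2, 3, 4, 5]
y = [1.8, 4.1, 5.9, 8.2, 9.8]

n = len(x)
mx, my = sum(x) / n, sum(y) / n
b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
    / sum((xi - mx) ** 2 for xi in x)
a = my - b * mx
yhat = [a + b * xi for xi in x]

tss = sum((yi - my) ** 2 for yi in y)                 # total variation
ess = sum((yh - my) ** 2 for yh in yhat)              # explained variation
uss = sum((yi - yh) ** 2 for yi, yh in zip(y, yhat))  # unexplained variation

# Total = Explained + Unexplained (up to rounding).
assert abs(tss - (ess + uss)) < 1e-9
```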

28
Sums of Squares Confusion
- Note: occasionally you will run across ESS and RSS, which generate confusion because each can be used for either quantity: ESS can mean "error sum of squares" or "estimated/explained sum of squares"; likewise, RSS can mean "residual sum of squares" or "regression sum of squares". Hence the use of USS for the unexplained sum of squares in this treatment.

29
This gives us the F test:
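The original slide's formula was an image; the standard form being referred to, for k predictors, is F = (Explained SS / k) / (Unexplained SS / (n − k − 1)). A sketch with illustrative data and k = 1:

```python
# Illustrative data only; k = 1 predictor.
x = [1, 2, 3, 4, 5, 6]
y = [2.2, 2.8, 4.5, 4.9, 6.1, 7.4]

n, k = len(x), 1
mx, my = sum(x) / n, sum(y) / n
b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
    / sum((xi - mx) ** 2 for xi in x)
a = my - b * mx
yhat = [a + b * xi for xi in x]

ess = sum((yh - my) ** 2 for yh in yhat)              # explained SS
uss = sum((yi - yh) ** 2 for yi, yh in zip(y, yhat))  # unexplained SS

# F = (explained SS / k) / (unexplained SS / (n - k - 1))
F = (ess / k) / (uss / (n - k - 1))
```

A large F (relative to the F distribution with k and n − k − 1 degrees of freedom) indicates the model as a whole reduces error significantly.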

30
Measures of Goodness of Fit
- The correlation coefficient
- r-squared

31
The Correlation Coefficient
- A measure of how closely the observations cluster around the regression line.
- It ranges between -1.0 and +1.0.
- It is closely related to the slope.
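A sketch of the correlation coefficient on illustrative data, including the relationship to the slope noted above: b = r · sqrt(Syy / Sxx).

```python
import math

# Illustrative data only.
x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 6]

n = len(x)
mx, my = sum(x) / n, sum(y) / n
sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
sxx = sum((xi - mx) ** 2 for xi in x)
syy = sum((yi - my) ** 2 for yi in y)

# Pearson correlation coefficient: always in [-1, +1].
r = sxy / math.sqrt(sxx * syy)
assert -1.0 <= r <= 1.0

# Its connection to the OLS slope: b = r * sqrt(Syy / Sxx).
b = sxy / sxx
assert abs(b - r * math.sqrt(syy / sxx)) < 1e-9
```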

32
R² (r-square)
- r² (or R²) is also called the coefficient of determination.



35
Extra Material on OLS: The Adjusted R²
- Since R² always increases with the addition of a new variable, the adjusted R² compensates for added explanatory variables.
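The original slide's formula was an image; the standard adjustment being described is: adjusted R² = 1 − (1 − R²)(n − 1)/(n − k − 1), where n is the sample size and k the number of explanatory variables. A sketch:

```python
def adjusted_r2(r2, n, k):
    """Adjusted R-square: penalizes R-square for added explanatory variables."""
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

# Adjusted R^2 never exceeds R^2 (for k >= 1) ...
assert adjusted_r2(0.90, n=30, k=2) < 0.90
# ... and with a weak fit and many predictors it can fall below zero.
assert adjusted_r2(0.05, n=20, k=10) < 0.0
```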

36
Extra Material on OLS: The F-test
- In addition, the F test for the entire model must be adjusted to compensate for the changed degrees of freedom.
- Note that F increases as n or R² increases, and decreases as k increases.
- Adding a variable will always increase R², but not necessarily adjusted R² or F. In addition, values of adjusted R² below 0.0 are possible.
