
1 Regression as Moment Structure

2 Regression Equation

Regression equation: $Y = \beta X + v$

Observable variables: $z = \begin{pmatrix} Y \\ X \end{pmatrix}$

Moment matrix:
$\Sigma = \begin{pmatrix} \sigma_{YY} & \sigma_{YX} \\ \sigma_{YX} & \sigma_{XX} \end{pmatrix}$

Moment structure:
$\Sigma(\theta) = \begin{pmatrix} \beta^2 \sigma_{XX} + \sigma_{vv} & \beta \sigma_{XX} \\ \beta \sigma_{XX} & \sigma_{XX} \end{pmatrix}$

Parameter vector: $\theta = (\beta, \sigma_{XX}, \sigma_{vv})'$

3 Sample: $z_1, z_2, \ldots, z_n$, $n$ iid observations

Sample moments:
$S = n^{-1} \sum_i z_i z_i' = \begin{pmatrix} s_{yy} & s_{yx} \\ s_{yx} & s_{xx} \end{pmatrix}$

Fitting $S$ to $\Sigma = \Sigma(\theta)$: estimator $\hat{\theta}$ such that $\Sigma(\hat{\theta})$ is close to $S$.

Three moment equations
$s_{yy} = \beta^2 \sigma_{XX} + \sigma_{vv}$, $\quad s_{yx} = \beta \sigma_{XX}$, $\quad s_{xx} = \sigma_{XX}$
with 3 (unknown) parameters.

Parameter estimates: $\hat{\theta} = (s_{yx}/s_{xx},\ s_{xx},\ s_{yy} - \hat{\beta}^2 s_{xx})'$

$\hat{\beta}$ is the same as the usual OLS estimate of $\beta$.
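As a quick check of the claim that the moment-equation solution coincides with OLS, here is a minimal sketch in Python/NumPy; the simulated data and parameter values are mine, not from the slides:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate the regression model Y = beta*X + v (hypothetical values)
n, beta, sigma_xx, sigma_vv = 500, 1.5, 2.0, 1.0
X = rng.normal(0.0, np.sqrt(sigma_xx), n)
Y = beta * X + rng.normal(0.0, np.sqrt(sigma_vv), n)

# Sample moments about zero, matching S = n^{-1} sum z_i z_i'
s_xx = np.mean(X * X)
s_yx = np.mean(Y * X)
s_yy = np.mean(Y * Y)

# Solve the three moment equations for theta = (beta, sigma_xx, sigma_vv)'
beta_hat = s_yx / s_xx
sigma_xx_hat = s_xx
sigma_vv_hat = s_yy - beta_hat**2 * s_xx

# OLS slope (no intercept, to match the moment setup)
beta_ols = (X @ Y) / (X @ X)
print(beta_hat, beta_ols)  # the two estimates coincide exactly
```

Since $(X'Y)/(X'X) = s_{yx}/s_{xx}$, the two computations are algebraically identical.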

4 Regression Equation

$Y = \beta x + v$, $\quad X = x + u$ (the regressor is observed with measurement error)

Observable variables: $z = \begin{pmatrix} Y \\ X \end{pmatrix}$

Moment structure:
$\Sigma(\theta) = \begin{pmatrix} \beta^2 \sigma_{xx} + \sigma_{vv} & \beta \sigma_{xx} \\ \beta \sigma_{xx} & \sigma_{xx} + \sigma_{uu} \end{pmatrix}$

Parameter vector: $\theta = (\beta, \sigma_{xx}, \sigma_{vv}, \sigma_{uu})'$, where $\sigma_{uu}$ is a new parameter.

5 Sample: $z_1, z_2, \ldots, z_n$, $n$ iid observations

Sample moments:
$S := n^{-1} \sum_i z_i z_i' = \begin{pmatrix} s_{yy} & s_{yx} \\ s_{yx} & s_{xx} \end{pmatrix}$

Fitting $S$ to $\Sigma = \Sigma(\theta)$: estimator $\hat{\theta}$ such that $\Sigma(\hat{\theta})$ is close to $S$.

Three moment equations
$s_{yy} = \beta^2 \sigma_{xx} + \sigma_{vv}$, $\quad s_{yx} = \beta \sigma_{xx}$, $\quad s_{xx} = \sigma_{xx} + \sigma_{uu}$
now with 4 (unknown) parameters.

Parameter estimates: $\hat{\theta} = ??$ With three equations and four unknowns the model is not identified, and $\hat{\beta}$ is no longer the usual OLS estimate of $\beta$.
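To see the identification failure concretely, one can hand the three moment equations to a symbolic solver. The sketch below (assuming SymPy; the numeric sample moments are made up for illustration) returns a one-parameter family of solutions rather than a unique $\theta$:

```python
import sympy as sp

beta, sxx, svv, suu = sp.symbols('beta sigma_xx sigma_vv sigma_uu')

# Hypothetical sample moments
s_yy, s_yx, s_xx = 3.25, 1.40, 1.80

# The three moment equations of this slide
eqs = [
    sp.Eq(beta**2 * sxx + svv, s_yy),
    sp.Eq(beta * sxx, s_yx),
    sp.Eq(sxx + suu, s_xx),
]

# Solving for all four parameters leaves one of them free
sol = sp.solve(eqs, [beta, sxx, svv, suu], dict=True)
print(sol)  # the other parameters are expressed in terms of beta
```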

6 The effect of measurement error in regression

(Path diagram: latent $x$ pointing to $Y$ and $X$, with errors $u$ on $X$ and $v$ on $Y$.)

$Y = \beta(X - u) + v = \beta X + (v - \beta u) = \beta X + w$, where $w = v - \beta u$.

Note that $w$ is correlated with $X$, unless $u$ or $\beta$ equals zero. So the classical LS estimate $b$ of $\beta$ is neither unbiased nor consistent. In fact,

$b \longrightarrow \sigma_{YX}/\sigma_{XX} = \beta\,(\sigma_{xx}/\sigma_{XX}) = \kappa \beta$

where $\kappa$ is the so-called reliability coefficient (the reliability of $X$). Since $0 \le \kappa \le 1$, $b$ suffers from downward (attenuation) bias.
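A small simulation makes the attenuation visible; this is a Python/NumPy sketch with made-up parameter values, not values from the slides. The naive LS slope concentrates around $\kappa\beta$ rather than $\beta$:

```python
import numpy as np

rng = np.random.default_rng(1)

n, beta = 100_000, 2.0
sigma_xx, sigma_uu, sigma_vv = 4.0, 1.0, 1.0  # hypothetical variances

x = rng.normal(0.0, np.sqrt(sigma_xx), n)      # true regressor
X = x + rng.normal(0.0, np.sqrt(sigma_uu), n)  # observed with error u
Y = beta * x + rng.normal(0.0, np.sqrt(sigma_vv), n)

b = (X @ Y) / (X @ X)                    # naive LS slope on the observed X
kappa = sigma_xx / (sigma_xx + sigma_uu) # reliability of X
print(b, kappa * beta)                   # b is near 1.6 = kappa*beta, not beta = 2.0
```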

7 Regression Equation

$Y = \beta_1 x_1 + \beta_2 x_2 + \cdots + \beta_p x_p + v$, $\quad X_k = x_k + u_k$ (observable variables).

In multiple regression, $b = S_{XX}^{-1} S_{XY}$ does not converge to $\beta$; the corrected estimator $b^* := (S_{XX} - \Sigma_{uu})^{-1} S_{XY}$ does.

Examples with EQS of regression with errors in variables. Using supplementary information to assess the magnitude of the variances of the errors in variables.
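The correction is a one-line matrix computation once $\Sigma_{uu}$ is known (say, from supplementary reliability information). Below is a minimal NumPy sketch with invented data for $p = 2$ predictors; the values of beta and Sigma_uu are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

n, p = 50_000, 2
beta = np.array([1.0, -0.5])
Sigma_uu = np.diag([0.3, 0.2])  # error variances, assumed known

x = rng.normal(size=(n, p))                          # true regressors
X = x + rng.normal(size=(n, p)) @ np.sqrt(Sigma_uu)  # observed with error
Y = x @ beta + rng.normal(size=n)

S_XX = X.T @ X / n
S_XY = X.T @ Y / n

b      = np.linalg.solve(S_XX, S_XY)             # naive: inconsistent
b_star = np.linalg.solve(S_XX - Sigma_uu, S_XY)  # corrected: consistent
print(b, b_star)  # b_star is close to beta = (1.0, -0.5)
```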

8 Path analysis & covariance structure: example with ROS data

9 Sample covariance matrix

Variables: ROS92, ROS93, ROS94, ROS95, with their sample covariances and means (numeric entries not preserved in the transcript); n = 70.

SEM: one factor F with loadings b1, b2, b3 on ROS92, ROS93, ROS94. bj = ? Is it a valid model?

10 Calculations

Under the one-factor model the covariances satisfy
$b_1 b_2 = s_{12}$, $\quad b_1 b_3 = s_{13}$, $\quad b_2 b_3 = s_{23}$.

$b_1 b_2 / b_1 b_3 = b_2 / b_3 = 29.56/\ldots = .978 \ \Rightarrow\ b_2 = .978\, b_3$

$b_2 b_3 = b_3(.978\, b_3) \ \Rightarrow\ b_3^2 = 31.09/.978, \quad b_3 = 5.64$

In the same way, we obtain $b_1 = 5.34$ and $b_2 = 5.52$.

The model test in this case is CHI2 = 0, df = 0 (the model is just identified).
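This "triad" computation is easy to script. The sketch below (Python; only 29.56 and 31.09 survive on the slide, so the remaining covariance is a placeholder value of my own) recovers the three loadings from the three pairwise covariances:

```python
import math

def triad_loadings(s12: float, s13: float, s23: float):
    """Solve b1*b2 = s12, b1*b3 = s13, b2*b3 = s23 for the loadings."""
    b3 = math.sqrt(s13 * s23 / s12)  # since (s13 * s23) / s12 = b3**2
    b2 = s23 / b3
    b1 = s12 / b2
    return b1, b2, b3

# s12 = 29.56 and s23 = 31.09 appear on the slide; s13 is a placeholder guess
print(triad_loadings(29.56, 30.22, 31.09))
```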

11 Fitted Model

Path diagram: factor F with indicators R92, R93, R94 (estimated loadings and error variances not preserved in the transcript). CHI2 = 0, df = 0.

12 EQS input

/TITLE
 FACTOR ANALYSIS MODEL (EXAMPLE ROS)
/SPECIFICATIONS
 CAS=70; VAR=4;
/LABEL
 V1=ROS92; V2=ROS93; V3=ROS94; V4=ROS95;
/EQUATIONS
 V1 = *F1 + E1;
 V2 = *F1 + E2;
 V3 = *F1 + E3;
/VARIANCES
 F1 = 1.0;
 E1 TO E3 = *;
/COVARIANCES
/MATRIX
/END

13 EQS output (measurement equations)

ROS92 = V1 = 5.359*F1 + 1.000 E1
ROS93 = V2 = 5.516*F1 + 1.000 E2
ROS94 = V3 = 5.637*F1 + 1.000 E3

VARIANCES OF INDEPENDENT VARIABLES: E1 (ROS92), E2 (ROS93), E3 (ROS94); the numeric estimates were not preserved in the transcript.

14 … with the help of EQS

RESIDUAL COVARIANCE MATRIX (S-SIGMA): ROS92 (V1), ROS93 (V2), ROS94 (V3); all entries equal zero, since the just-identified model reproduces the three sample covariances exactly.

CHI-SQUARE = 0.000 BASED ON 0 DEGREES OF FREEDOM

STANDARDIZED SOLUTION:
ROS92 = V1 = .631*F1 + E1
ROS93 = V2 = .917*F1 + E2
ROS94 = V3 = .827*F1 + E3

15 One-factor, four-indicator model

Path diagram: factor F with indicators R92, R93, R94, R95; free loadings (*) and free error variances (*). CHI2 = ?, df = ?, p-value = ?

16 … with the help of EQS

/TITLE
 FACTOR ANALYSIS MODEL (EXAMPLE ROS)
 ! This line is not read
/SPECIFICATIONS
 CAS=70; VAR=4;
/LABEL
 V1=ROS92; V2=ROS93; V3=ROS94; V4=ROS95;
/EQUATIONS
 V1 = *F1 + E1;
 V2 = *F1 + E2;
 V3 = *F1 + E3;
 V4 = *F1 + E4;
/VARIANCES
 F1 = 1.0;
 E1 TO E4 = *;
/COVARIANCES
/MATRIX
/END

17 … with the help of EQS

ROS92 = V1 = 4.998*F1 + 1.000 E1
ROS93 = V2 = 4.837*F1 + 1.000 E2
ROS94 = V3 = 6.417*F1 + 1.000 E3
ROS95 = V4 = 5.393*F1 + 1.000 E4

VARIANCES OF INDEPENDENT VARIABLES: E1 (ROS92), E2 (ROS93), E3 (ROS94), E4 (ROS95); numeric estimates lost in the transcript.

18 RESIDUAL COVARIANCE MATRIX (S-SIGMA): ROS92 (V1), ROS93 (V2), ROS94 (V3), ROS95 (V4); numeric entries lost in the transcript.

CHI-SQUARE = 6.27 BASED ON 2 DEGREES OF FREEDOM
PROBABILITY VALUE FOR THE CHI-SQUARE STATISTIC IS .043

STANDARDIZED SOLUTION:
ROS92 = V1 = .631*F1 + E1
ROS93 = V2 = .917*F1 + E2
ROS94 = V3 = .827*F1 + E3

19 Fitted Model

Path diagram: factor F with indicators R92, R93, R94, R95 (estimated loadings and error variances not preserved in the transcript). CHI2 = 6.27, df = 2, p-value = .043.
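As a sanity check, the reported p-value follows directly from the chi-square distribution; a one-line verification in Python, assuming SciPy is available:

```python
from scipy.stats import chi2

# Upper-tail probability of a chi-square(2) variate at 6.27
print(chi2.sf(6.27, df=2))  # about 0.0435, matching the reported p-value of .043
```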

20 EQS input with equality constraints

/TITLE
 FACTOR ANALYSIS MODEL (EXAMPLE ROS)
/SPECIFICATIONS
 CAS=70; VAR=4;
/LABEL
 V1=ROS92; V2=ROS93; V3=ROS94; V4=ROS95;
/EQUATIONS
 V1 = *F1 + E1;
 V2 = *F1 + E2;
 V3 = *F1 + E3;
 V4 = *F1 + E4;
/VARIANCES
 F1 = 1.0;
 E1 TO E4 = *;
/COVARIANCES
/CONSTRAINTS
 (V1,F1)=(V2,F1)=(V3,F1)=(V4,F1);
/MATRIX
/END

21 … estimation results

ROS92 = V1 = 5.521*F1 + 1.000 E1
ROS93 = V2 = 5.521*F1 + 1.000 E2
ROS94 = V3 = 5.521*F1 + 1.000 E3
ROS95 = V4 = 5.521*F1 + 1.000 E4

CHI-SQUARE = … BASED ON 5 DEGREES OF FREEDOM
PROBABILITY VALUE FOR THE CHI-SQUARE STATISTIC IS …
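The 5 degrees of freedom can be checked by counting: with 4 observed variables there are $4 \cdot 5/2 = 10$ distinct covariance moments, and the constrained model has 1 common loading plus 4 error variances, i.e. 5 free parameters, so $df = 10 - 5 = 5$. By the same count, the unconstrained model of slide 16 has $4 + 4 = 8$ parameters, giving the $df = 2$ reported earlier.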

22 … EQS uses an iterative optimization method

ITERATIVE SUMMARY (columns: ITERATION, PARAMETER ABS CHANGE, ALPHA, FUNCTION; the numeric rows were not preserved in the transcript).

23 Exercise:

a) Write the covariance structure for the one-factor, four-indicator model.
b) From the ML estimates of this model, shown in the previous slides, compute the fitted covariance matrix.
c) In relation with b), compute the residual covariance matrix.

Note: For c), use the sample moments of the ROS data (the ROS92, ROS93, ROS94, ROS95 covariance matrix and means of slide 9; n = 70).
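For part b), the fitted covariance matrix of a one-factor model with factor variance fixed at 1 is $\hat{\Sigma} = \hat{\lambda}\hat{\lambda}' + \hat{\Psi}$, with $\hat{\Psi}$ the diagonal matrix of error variances. A minimal NumPy sketch follows; the loadings are the ML estimates from slide 17, while the error variances did not survive the transcript, so the values below are placeholders:

```python
import numpy as np

# ML loading estimates (slide 17)
lam = np.array([4.998, 4.837, 6.417, 5.393])

# Error variances: placeholders, since the estimates were lost in the transcript
psi = np.diag([10.0, 10.0, 10.0, 10.0])

# Fitted covariance matrix under the one-factor model (factor variance = 1)
Sigma_hat = np.outer(lam, lam) + psi
print(Sigma_hat)

# Part c): residual matrix, given the sample covariance matrix S of slide 9
# S = np.array([...])  # fill in from the data
# print(S - Sigma_hat)
```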

24 One-factor, four-indicator model with means

Path diagram: factor F with indicators R92, R93, R94, R95; free loadings (*), free error variances (*), and a constant (1) with a free intercept (*) for each indicator. CHI2 = ?, df = ?, p-value = ?

25 EQS input with mean structure

/TITLE
 FACTOR ANALYSIS MODEL (EXAMPLE ROS data)
/SPECIFICATIONS
 CAS=70; VAR=4; ANALYSIS = MOMENT;
/LABEL
 V1=ROS92; V2=ROS93; V3=ROS94; V4=ROS95;
/EQUATIONS
 V1 = *V999 + *F1 + E1;
 V2 = *V999 + *F1 + E2;
 V3 = *V999 + *F1 + E3;
 V4 = *V999 + *F1 + E4;
/VARIANCES
 F1 = 1.0;
 E1 TO E4 = *;
/COVARIANCES
/CONSTRAINTS
 ! (V1,F1)=(V2,F1)=(V3,F1)=(V4,F1);
/MATRIX
/MEANS
/END

26 Estimation results

ROS92 = V1 = 6.270*V999 + …*F1 + E1
ROS93 = V2 = 7.350*V999 + …*F1 + E2
ROS94 = V3 = …*V999 + …*F1 + E3
ROS95 = V4 = 8.800*V999 + …*F1 + E4

VARIANCES OF INDEPENDENT VARIABLES: E1 (ROS92), E2 (ROS93), E3 (ROS94), E4 (ROS95); numeric estimates lost in the transcript.