SEM Basics 2: Byrne Chapter 2; Kline pp. 7-15, 50-51, 91-103, 124-130.

Kline! Kline (pages 7-8) talks about three different types of approaches: – Strictly confirmatory – Alternative models – Model generation

Types of SEM Strictly confirmatory – the Byrne approach – You have a theorized model and you only accept or reject it; no modifications.

Types of SEM Alternative models – comparison between many different models of the construct – This type typically happens when different theories posit different structures – like, is it a 6-factor model or a 4-factor model?

Types of SEM Model generating – the original model doesn’t work, so you edit it. (This is where you might modify the order of variables or the places the arrows go among the same variables.)

Specification Specification is the term for generating the model hypothesis and drawing out how you think the variables are related.

Specification Errors Omitted predictors – variables that are important but you left them out – LOVE: left out variable error

Covariances To be able to understand identification, you have to understand that SEM is an analysis of covariances – You are trying to explain as much of the covariance between variables as possible with your model
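As an illustration (not from the slides): the covariance structure that SEM tries to reproduce is just the sample covariance matrix of the observed variables. A minimal sketch with made-up numbers:

```python
import numpy as np

# Toy data: 5 cases on 3 observed variables (hypothetical numbers).
data = np.array([[2.0, 3.0, 4.0],
                 [3.0, 5.0, 6.0],
                 [4.0, 4.0, 5.0],
                 [5.0, 7.0, 8.0],
                 [6.0, 6.0, 9.0]])

# S is the observed covariance matrix the model tries to reproduce;
# rowvar=False tells numpy that columns (not rows) are the variables.
s = np.cov(data, rowvar=False)
```

Fitting a model amounts to finding parameter values whose implied covariance matrix comes as close to S as possible.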

Covariances You can also estimate a mean structure – Usually when you want to estimate factor means (actual numbers for those bubbles). – You can compare factor means across groups as an analysis.

Sample Size The N:q rule – N = number of people – q = number of estimated parameters (will explain in a bit) You want the N:q ratio to be 20:1 or greater in a perfect world, 10:1 if you can manage it.
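The N:q rule is easy to sketch in code; the specific numbers below (260 cases, 13 estimated parameters) are hypothetical:

```python
def n_to_q_ratio(n, q):
    """Cases per estimated parameter (the N:q rule)."""
    return n / q

def sample_size_ok(n, q, target=20):
    # True when the N:q ratio meets the target (20:1 ideal, 10:1 minimum).
    return n_to_q_ratio(n, q) >= target

# e.g., 260 people and 13 estimated parameters gives exactly 20:1
ratio = n_to_q_ratio(260, 13)
```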

Identification Essentially, models that are identified have a unique solution (an invertible matrix) – That means you get one unique answer for each parameter you are estimating – If many possible answers exist (like solving X + Y = some number for both X and Y), then the model is not identified.

Identification Identification is tied to: – Parameters to be estimated – Degrees of Freedom

Identification Free parameter – will be estimated from the data Fixed parameter – will be set to a specific value (usually 1).

Identification Constrained parameter – estimated from the data subject to some specific rule – E.g., setting multiple paths to the same variable name (like cheese); they will be estimated but forced to all be the same – Also known as an equality constraint

Identification Cross group equality constraints – mostly used in multigroup models, forces the same paths to be equal (but estimated) for each group

Identification Other constraints that aren’t used very often: – Proportionality constraint – Inequality constraint – Nonlinear constraints

Figuring out what’s estimated For the two-factor diagram on this slide: Each path without a 1 on it will be estimated: – 4 paths (regression coefficients) Each error term variance (not shown) will be estimated: – 6 variances – Remember the error paths themselves are not estimated because they have a 1 fixed on them. Each factor variance will be estimated: – 2 variances The covariance arrow will be estimated: – 1 covariance Total: 13 estimated parameters.

Degrees of Freedom Note: DF now has nothing to do with sample size. Possible parameters – P(P+1) / 2 – Where P = number of observed variables

Degrees of Freedom For our model, P = 6, so possible parameters = 6(6+1) / 2 = 21 DF = possible parameters – estimated parameters – df = 21 – 13 = 8
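The df arithmetic from this slide can be sketched directly; the parameter counts (4 loadings + 6 error variances + 2 factor variances + 1 covariance) come from the earlier "what's estimated" slide:

```python
def possible_parameters(p):
    # p observed variables yield p(p+1)/2 unique variances and covariances
    return p * (p + 1) // 2

def model_df(p, estimated):
    """Degrees of freedom = possible parameters - estimated parameters."""
    return possible_parameters(p) - estimated

# The slide's model: 6 observed variables, 13 estimated parameters
df = model_df(6, 4 + 6 + 2 + 1)  # 21 - 13 = 8
```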

Identification Just identified – you estimate as many parameters as there are possible parameters – That means that df = 0. – EEEK.

Identification Over identified – you have more possible parameters than you actually estimate – df is a positive number. – GREAT!

Identification Under identified – you are estimating more parameters than there are possible parameters – df is negative – BAD!

Identification Empirical under identification – when two observed variables are highly correlated, which effectively reduces the number of parameters you can estimate

Identification Even if you have an over identified model, you can have under identified sections. – You will usually get an error in the output saying something to the effect of “you suck, you forgot a 1 somewhere”.

Identification The reference variable is the one you set to 1. – That helps with the df (keeps the model over identified), gives the latent variable a scale, and generally helps things run smoothly. A cool note: which variable you set does not matter. – Except in very strange cases where that particular observed variable has no relationship with the latent variable.

Identification Another note: The reference variable will not have an estimated unstandardized parameter. – But you will get a standardized parameter, so you can check if the variable is loading like what you think it should. – If you want to get a p value for that parameter, you can run the model once, then change the reference variable, and run again.

A side note The section on second order factors we will cover more in depth when we get to CFA – The important part is making sure each section of the model is identified, so you’ll notice that (page 36) the variance is set to 1 on the second-order latent to solve that problem. You can also set a path to 1.

What to do? If you have a complex model: – Start small – work with the measurement model components first, since they have simple identification rules – Then slowly add variables to see where the problem occurs.

Kline stuff Chapter 2 = a great review of regression techniques Chapter 3 = data screening review (next slide is over page 50-51) Chapter 4 = tells you about the types of programs available

Kline Stuff Chapter 5 – specification, what the symbols are etc. Chapter 6 – Identification (covered a lot of this) – Page 130 on has specific identification guidelines that are good rules of thumb

Positive Definite Matrices One of the problems you’ll see running SEM is an error about the matrix not being positive definite. That indicates one of the following: – 1) the matrix is singular – 2) eigenvalues are negative – 3) the determinant is zero or negative – 4) correlations are out of bounds

Positive Definite Matrices Singular matrix – Simply put: each column has to indicate something unique – Therefore, if you have two columns that are perfectly correlated OR are linear transformations of each other, you will have a singular matrix.
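A small demonstration of this, using hypothetical numbers: one column is an exact linear transformation of another, so the resulting covariance matrix is singular (its determinant is effectively zero):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = 2 * x + 1                           # perfectly correlated with x
z = np.array([2.0, 1.0, 4.0, 3.0, 5.5])  # carries unique information

# The covariance matrix of (x, y, z) is rank-deficient because
# column y adds nothing that x doesn't already provide.
cov = np.cov(np.column_stack([x, y, z]), rowvar=False)
det = np.linalg.det(cov)                # effectively zero -> singular
```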

Positive Definite Matrices Negative eigenvalues – remember that eigenvalues are combinations of variance – And variance is positive (it’s squared in the formula!) – So negative = bad.

Positive Definite Matrices Determinants = the product of the eigenvalues – So, again, the determinant of a positive definite matrix cannot be negative. – A zero determinant indicates a singular matrix.

Positive Definite Matrices Out of bounds – basically that means that the data has correlations over 1 or negative variances (called a Heywood case).
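Putting the four checks together: here is a sketch that flags why a matrix fails to be positive definite. The `diagnose` helper and its tolerance are my own illustration, not from any SEM package, and the "impossible" correlation pattern is made up (x correlating .9 with both y and z while y and z correlate -.9 is mathematically inconsistent):

```python
import numpy as np

def diagnose(r, tol=1e-10):
    """Flag common 'not positive definite' causes in a correlation matrix."""
    eigvals = np.linalg.eigvalsh(r)          # eigenvalues of a symmetric matrix
    off_diag = r - np.diag(np.diag(r))
    return {
        "negative_eigenvalue": bool(eigvals.min() < -tol),
        "singular": bool(abs(np.linalg.det(r)) < tol),
        "out_of_bounds": bool(np.any(np.abs(off_diag) > 1)),
    }

# An impossible pattern: .9 with both y and z, but y and z at -.9.
bad = np.array([[1.0,  0.9,  0.9],
                [0.9,  1.0, -0.9],
                [0.9, -0.9,  1.0]])
flags = diagnose(bad)  # negative eigenvalue -> not positive definite
```

A clean correlation matrix (e.g., the identity) passes all three checks; the `bad` matrix above fails on the negative-eigenvalue check even though every individual correlation is in bounds.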

For now Let’s talk about output in AMOS Make a small path model to run and look at output.

Up next Kline page Path Models – Kline chapter 7 estimation methods – Kline chapter 8 fit indices