G89.2247 Lecture 7: Revisiting Hierarchical Mixed Models. A General Version of the Model; Variance/Covariances of Two Kinds of Random Effects; Parameter Estimation.


Slide 1: Revisiting Hierarchical Mixed Models. A General Version of the Model; Variance/Covariances of Two Kinds of Random Effects; Parameter Estimation Details; Examples of Alternative Models.

Slide 2: Revisiting the Hierarchical Mixed Models. Last week we considered a two-level model for the jth person at the ith time. Let Y be some outcome, X some time-dependent process (such as a measure of time), and D some between-person variable.

LEVEL 1: Y_ij = β_0j + β_1j X_ij + r_ij
LEVEL 2: β_0j = γ_00 + γ_01 D_j + U_0j
         β_1j = γ_10 + γ_11 D_j + U_1j

The combined model is

Y_ij = γ_00 + γ_10 X_ij + γ_01 D_j + γ_11 (X_ij * D_j) + U_0j + U_1j X_ij + r_ij
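The algebra of substituting the Level 2 equations into Level 1 can be checked numerically. This is a minimal numpy sketch, not from the lecture: all the γ values, the covariate value, and the random draws are assumed for illustration only.

```python
import numpy as np

# Hedged sketch: verify that Level 1 + Level 2 reproduce the combined equation.
# All numeric values below are made up for illustration.
rng = np.random.default_rng(0)
g00, g01, g10, g11 = 2.0, 0.5, 1.0, -0.3   # assumed fixed-effect values
X = np.arange(3.0)                          # times 0, 1, 2
D = 1.0                                     # a between-person covariate value
U0, U1 = rng.normal(size=2)                 # person-level random effects
r = rng.normal(size=3)                      # Level-1 residuals

# Level 2 first, then Level 1
b0 = g00 + g01 * D + U0
b1 = g10 + g11 * D + U1
y_two_level = b0 + b1 * X + r

# Combined single-equation form
y_combined = g00 + g10 * X + g01 * D + g11 * (X * D) + U0 + U1 * X + r

print(np.allclose(y_two_level, y_combined))  # True
```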

Slide 3: Two Kinds of Random Effects. U_0j and U_1j are assumed to be normally distributed with mean zero and covariance matrix T. Last week we assumed that r_ij was iid Normal with mean zero and variance σ². We assumed that the Level 1 model accounted for within-person dependency of scores.

Slide 4: A More General Model. We don't have to assume that the r_ij terms are uncorrelated within person. Suppose we write a vector of within-person variables over n time points as

Y_j = W_j γ + Z_j U_j + r_j

where Y_j is the n × 1 list of outcomes, W_j is the n × 4 array of codes for the fixed effects, and Z_j is the array of codes for the random effects. The fixed-effect coefficients are collected in the vector γ, and the random-effect coefficients are collected in U_j. The residuals are in the list r_j.

Slide 5: A Numerical Example. Suppose that we have a person from group 0 measured at times 0, 1, 2 with scores for Y of 2.329, 3.449, and a third value (not preserved in the transcript). We write the model Y_j = W_j γ + Z_j U_j + r_j with the matrices filled in numerically (the slide's matrix display was not preserved).
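The design matrices for a person like this can be built directly. A hedged reconstruction, since the slide's own matrices were lost: the column order of W_j (intercept, time, group, time*group) is assumed from the combined model, and group 0 means D = 0.

```python
import numpy as np

# Hedged reconstruction of W_j and Z_j for a group-0 person at times 0, 1, 2.
# Column order (intercept, time, group, time*group) is an assumption.
times = np.array([0.0, 1.0, 2.0])
D = 0.0
W_j = np.column_stack([np.ones(3), times, np.full(3, D), times * D])
Z_j = np.column_stack([np.ones(3), times])   # random intercept and slope
print(W_j)
print(Z_j)
```

Because D = 0, the group and interaction columns of W_j are all zeros for this person.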

Slide 6: More on the General Model.
- W_j contains constants that say which of the fixed-effect constants (γ) apply to Y_j.
- Z_j contains information relating the random effects (U_j) to the set of Y_ij values in Y_j.
- r_j contains the residuals from the within-person linear fit.

Var(U_j) = T
Var(r_j) = R
Var(Y_j) = Z_j T Z_j' + R = V
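The variance identity Var(Y_j) = Z_j T Z_j' + R is easy to compute for a small case. A minimal sketch with made-up values of T and R (nothing here comes from the lecture's data):

```python
import numpy as np

# Hedged numeric illustration of Var(Y_j) = Z_j T Z_j' + R.
# T and R values are assumed for illustration.
Z_j = np.array([[1.0, 0.0],
                [1.0, 1.0],
                [1.0, 2.0]])                 # random intercept and slope codes
T = np.array([[0.8, 0.1],
              [0.1, 0.4]])                   # Var(U_j), assumed
R = 0.5 * np.eye(3)                          # iid residuals, sigma^2 = 0.5
V = Z_j @ T @ Z_j.T + R
print(V)
```

Note that even with a diagonal R, the random effects make V non-diagonal: the repeated measures are correlated through U_j.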

Slide 7: Thinking about Var(r_j) = R. The matrix R is the expected covariance among the residuals, after taking into account the fixed effects (γ) and the random regression effects (U_j).
- If the residuals are uncorrelated with common variance, R = σ²I (a diagonal matrix). This was implicitly assumed so far.
- Other structures for R can be considered, such as autoregressive, AR(1), or moving average, MA(1).
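An AR(1) residual covariance has the form R[i, j] = σ² ρ^|i−j|. A small sketch (σ² and ρ are assumed values, not estimates from the lecture's data):

```python
import numpy as np

# Hedged sketch of an AR(1) residual covariance: R[i, j] = sigma^2 * rho**|i-j|.
sigma2, rho, n = 1.0, 0.6, 4                 # assumed values
idx = np.arange(n)
R = sigma2 * rho ** np.abs(idx[:, None] - idx[None, :])
print(R)
```

Correlations decay geometrically with the time lag, which is what makes AR(1) a one-parameter alternative to the unstructured R.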

Slide 8: The General Model: Sample Level. Suppose we string the Y_j vectors together:

Y = W γ + Z U + r

where Y stacks the Y_j, W stacks the W_j, and Z is block-diagonal in the Z_j.

Slide 9: A Numerical Example. (The slide displayed the stacked W, Z, and Y matrices; the numerical values were not preserved in the transcript.)

Slide 10: Estimation Using the General Framework. If the observations were all independent, the general equation would be

Y = W γ + r

and the OLS estimate of the fixed regression effects γ would be

γ̂ = (W'W)⁻¹ W'Y
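The OLS formula can be demonstrated on simulated independent data. A minimal sketch with assumed true coefficients (none of the values come from the lecture):

```python
import numpy as np

# Hedged OLS sketch: gamma_hat = (W'W)^{-1} W'Y for a stacked design.
rng = np.random.default_rng(1)
W = np.column_stack([np.ones(20), np.tile(np.arange(5.0), 4)])  # intercept, time
gamma_true = np.array([2.0, 0.7])            # assumed true fixed effects
Y = W @ gamma_true + 0.1 * rng.normal(size=20)

# Solve the normal equations (W'W) gamma = W'Y rather than inverting W'W.
gamma_hat = np.linalg.solve(W.T @ W, W.T @ Y)
print(gamma_hat)   # close to [2.0, 0.7]
```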

Slide 11: Estimation Using the General Framework. In the general case, the estimates of the fixed effects need to take into account both the random effects and the correlated residuals. If the matrices T and R were known, we would have

γ̂ = (W' V⁻¹ W)⁻¹ W' V⁻¹ Y

where Var(Y_j) = Z_j T Z_j' + R = V
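When V is known, the generalized least squares formula above can be applied directly. A small sketch with an assumed AR(1)-type V (not the lecture's data; noise is scaled down so the recovery is visible):

```python
import numpy as np

# Hedged GLS sketch: gamma_hat = (W' V^{-1} W)^{-1} W' V^{-1} Y with known V.
rng = np.random.default_rng(2)
n = 6
W = np.column_stack([np.ones(n), np.arange(float(n))])
V = 0.7 ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))  # assumed V
L = np.linalg.cholesky(V)                    # to draw errors with cov V
gamma_true = np.array([1.0, 0.5])
Y = W @ gamma_true + 0.1 * (L @ rng.normal(size=n))

Vi = np.linalg.inv(V)
gamma_hat = np.linalg.solve(W.T @ Vi @ W, W.T @ Vi @ Y)
print(gamma_hat)
```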

Slide 12: Iterative Solution in the General Case. PROC MIXED:
1) estimates the fixed effects using ordinary least squares,
2) estimates T and R using ML or REML on the residuals,
3) estimates the fixed effects using weighted least squares,
4) re-estimates T and R using ML/REML, and so on.
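The alternation between fixed-effect and covariance estimation can be sketched in a toy form. This is not PROC MIXED's algorithm: PROC MIXED estimates T and R by ML/REML, whereas this sketch uses a simple sample covariance of the residuals just to show the OLS-then-weighted-refit loop; all data are simulated with assumed values.

```python
import numpy as np

# Hedged toy version of the alternating scheme: OLS start, re-estimate a common
# within-person covariance V from residuals, refit by GLS, repeat.
rng = np.random.default_rng(3)
n_sub, n_time = 200, 3
time = np.arange(float(n_time))
W = np.tile(np.column_stack([np.ones(n_time), time]), (n_sub, 1))
U0 = np.repeat(rng.normal(size=n_sub), n_time)       # random intercepts
gamma_true = np.array([2.0, 0.5])                    # assumed fixed effects
Y = W @ gamma_true + U0 + 0.3 * rng.normal(size=n_sub * n_time)

gamma = np.linalg.solve(W.T @ W, W.T @ Y)            # 1) OLS start
for _ in range(4):                                   # 2)-4) alternate
    resid = (Y - W @ gamma).reshape(n_sub, n_time)
    V = np.cov(resid, rowvar=False)                  # moment estimate of V
    Vi_full = np.kron(np.eye(n_sub), np.linalg.inv(V))  # block-diagonal V^-1
    gamma = np.linalg.solve(W.T @ Vi_full @ W, W.T @ Vi_full @ Y)
print(gamma)   # close to the assumed true fixed effects [2.0, 0.5]
```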

Slide 13: Theory and Practice. In theory we can estimate both Var(U) = T and Var(r) = R. In practice, the estimates need to be structured. Consider two patterns for R (the slide's display was not preserved; the later slides compare R = σ²I with an AR(1) pattern).

Slide 14: Simulated Data Example. (The slide's display was not preserved in the transcript.)

Slide 15: PROC MIXED Syntax

filename myimport 'c:\mixedex.por';
proc convert spss=myimport out=sasuser.mixedex;
TITLE1 'SMALL SIMULATED RANDOM COEFFICIENTS DATA';
run;

proc mixed covtest;
  class sub;
  model y = g time g*time / s;
  random intercept / subject=sub g gcorr type=un;
  REPEATED / TYPE=AR(1) SUBJECT=SUB R RCORR;
  TITLE2 'RANDOM INTERCEPT/SLOPE W CORRELATED RESIDUALS';
run;

Slide 16: Bar Anxiety Study: Estimating R as AR(1). To estimate R, we include a REPEATED statement in PROC MIXED:

PROC MIXED NOCLPRINT COVTEST METHOD=REML;
  CLASS id;
  MODEL anx = group week group*week / s;
  RANDOM intercept week / SUBJECT=id type=un gcorr;
  REPEATED / TYPE=AR(1) SUBJECT=ID R RCORR;
  TITLE2 '2RANDOMEFFECTS: ASSUMES RESIDUALS HAVE AR(1) CORR PATTERN';

Slide 17: Iteration Results. (The Iteration History table, with columns Iteration, Evaluations, -2 Res Log Like, and Criterion, was not preserved in the transcript.) Convergence criteria met.

Slide 18: Random Effect Estimates. (The slide showed the Estimated R Correlation Matrix for id 1 and the Estimated G Correlation Matrix for the Intercept and week effects; the numerical values were not preserved in the transcript.)

Slide 19: Random Effects Continued. Covariance Parameter Estimates (columns: Cov Parm, Subject, Estimate, Standard Error, Z Value, Pr Z; most values were not preserved in the transcript):

UN(1,1)   id
UN(2,1)   id
UN(2,2)   id   1.06E… (truncated)
AR(1)     id   Pr Z < .0001
Residual       Pr Z < .0001

Slide 20: Fixed Effects for AR(1) Model. Solution for Fixed Effects (columns: Effect, Estimate, Standard Error, DF, t Value, Pr > |t|; the estimates were not preserved in the transcript):

Intercept    Pr > |t| < .0001
group        Pr > |t| < .0001
week         Pr > |t| < .0001
group*week   Pr > |t| < .0001

Slide 21: Compare to Model Assuming R = I. The fixed effects are quite similar.

Slide 22: Comparing Alternative Models. Two tools are useful:
- AIC: Akaike's Information Criterion. Based on the log likelihood, but adjusted for the number of parameters that are estimated (the larger the number of estimated parameters, the larger the adjustment). Can be compared across REML runs. Smaller is better.
- -2 × Log Likelihood comparisons. Useful for nested models. Should be estimated using ML rather than REML.

Slide 23: Example: R = I vs R = AR(1).
Original analysis, which assumed R = I:
- AIC from REML analysis = 784
- -2LL from ML = (value not preserved in the transcript)
New analysis with R estimated as AR(1):
- AIC from REML analysis = 772
- -2LL from ML = (value not preserved in the transcript)
One new parameter estimate reduced -2LL by 5.6. A chi-square of 5.6 on 1 df is significant.
- AR(1) appears to be the better model in terms of fit.
- The random effect for slope goes away!
- Fixed effect results are very similar.
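The comparison arithmetic on this slide can be laid out explicitly. A small sketch using only the numbers that survived in the transcript (the AIC values and the -2LL reduction); the chi-square critical value 3.84 is the standard 95th percentile of chi-square on 1 df:

```python
# Hedged check of the slide's model comparison: the AR(1) parameter lowers AIC
# by 12, and the 5.6 reduction in -2LL exceeds the chi-square(1) critical value.
aic_identity, aic_ar1 = 784.0, 772.0   # AIC values from the REML runs
delta_aic = aic_identity - aic_ar1
delta_2ll = 5.6                        # -2LL reduction reported on the slide
chi2_crit_1df = 3.84                   # 95th percentile of chi-square(1)
print(delta_aic, delta_2ll > chi2_crit_1df)   # 12.0 True
```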