Presentation transcript:

Raymond J. Carroll Texas A&M University Postdoctoral Training Program: Non/Semiparametric Regression and Clustered/Longitudinal Data

[Map of Texas: College Station, home of Texas A&M University; Wichita Falls, my hometown; Midland; West Texas and East Texas; Big Bend National Park; Palo Duro Canyon, the Grand Canyon of Texas; Guadalupe Mountains National Park; I-35 and I-45.]

Acknowledgments: Raymond Carroll, Alan Welsh, Naisyin Wang, Enno Mammen, Xihong Lin, Oliver Linton. A series of papers is on my web site. Lin, Wang and Welsh: longitudinal data (Mammen & Linton for pseudo-observation methods). Linton and Mammen: time series data.

Outline: longitudinal models (panel data); background: splines = kernels for independent data; correlated data: do splines = kernels?

Panel Data (for simplicity). i = 1,…,n clusters/individuals; j = 1,…,m observations per cluster.

Subject | Wave 1 | Wave 2 | … | Wave m
1       | X      | X      | … | X
2       | X      | X      | … | X
…       |        |        |   |
n       | X      | X      | … | X

Panel Data (for simplicity). i = 1,…,n clusters/individuals; j = 1,…,m observations per cluster. Important points: the cluster size m is meant to be fixed; this is not a multiple time series problem where the cluster size increases to infinity. Some comments on the single time series problem are given near the end of the talk.

The Marginal Nonparametric Model: Y_ij = θ(X_ij) + ε_ij, where Y = response and X = time-varying covariate, with correlated errors within a cluster. Question: can we improve efficiency by accounting for correlation?

Nonstandard Example: colon carcinogenesis experiments, with cell-based measures within single rats of DNA damage, differentiation, proliferation, apoptosis, P27, etc.

The Marginal Nonparametric Model. Important assumption: covariates at other waves are not conditionally predictive, i.e., they are surrogates. This assumption is required for any GLS fit, including parametric GLS.

Independent Data. Splines (smoothing splines, P-splines, etc.) with penalty parameter λ minimize a penalized sum of squares, Σ_i {Y_i − θ(X_i)}² + λ × (roughness penalty on θ). The result is a ridge regression fit: some bias, smaller variance. λ = 0 is over-parameterized least squares; λ = ∞ is a polynomial regression.
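To make the spline-as-ridge-regression point concrete (this example is mine, not from the talk), here is a minimal P-spline sketch in Python; the truncated-line basis, the sine test function, and the value of `lam` are all illustrative assumptions:

```python
import numpy as np

def pspline_fit(x, y, knots, lam):
    """P-spline as ridge regression: minimize
    ||y - B @ beta||^2 + lam * beta' D beta over beta.
    lam = 0 gives over-parameterized least squares;
    lam -> infinity shrinks toward a (linear) polynomial fit."""
    # Truncated-line basis: intercept, slope, one hinge per knot
    B = np.column_stack([np.ones_like(x), x] +
                        [np.maximum(x - k, 0.0) for k in knots])
    # Penalize only the hinge coefficients (ridge penalty)
    D = np.diag([0.0, 0.0] + [1.0] * len(knots))
    beta = np.linalg.solve(B.T @ B + lam * D, B.T @ y)
    return B @ beta  # fitted values; linear in the responses y

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 1.0, 100))
y = np.sin(2.0 * np.pi * x) + rng.normal(0.0, 0.3, 100)
fit = pspline_fit(x, y, knots=np.linspace(0.1, 0.9, 9), lam=0.1)
```

The fit has the form S(λ)·y for a smoother matrix S(λ), which is what makes the comparison with kernel weight functions below possible.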

Independent Data. Kernels (local averages, local linear, etc.), with kernel density function K and bandwidth h. As the bandwidth h → 0, only observations with X near t get any weight in the fit.
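And the kernel counterpart, again a minimal sketch of my own with a Gaussian kernel assumed:

```python
import numpy as np

def local_average(x, y, t, h):
    """Nadaraya-Watson local average at t: kernel-weighted mean of the
    responses. As h -> 0, only observations with x near t get weight."""
    w = np.exp(-0.5 * ((x - t) / h) ** 2)   # K((x - t) / h)
    return np.sum(w * y) / np.sum(w)

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 100)
y = np.sin(2.0 * np.pi * x) + rng.normal(0.0, 0.3, 100)
theta_25 = local_average(x, y, t=0.25, h=0.05)
```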

Independent Data. Major methods: splines and kernels; smoothing parameters are required for both. Fits: similar in many (most?) datasets. Expectation: some combination of bandwidths and kernel functions looks like splines.

Independent Data. Splines and kernels are linear in the responses. Silverman showed that there is a kernel function and a bandwidth so that the weight functions are asymptotically equivalent. In this sense, splines = kernels.
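In symbols, using the weight-function notation G_n(t, x) from the next slide (the display is my reconstruction, with Silverman's 1984 equivalent kernel):

```latex
\hat{\theta}(t) = \sum_{i=1}^{n} G_n(t, X_i)\, Y_i,
\qquad
G_n(t, x) \approx \frac{1}{n\,f(x)\,h(x)}\, K_S\!\left(\frac{t - x}{h(x)}\right),
\qquad
K_S(u) = \frac{1}{2}\, e^{-|u|/\sqrt{2}} \sin\!\left(\frac{|u|}{\sqrt{2}} + \frac{\pi}{4}\right),
```

where f is the design density and h(x) is a local bandwidth determined by λ and f. This is the precise sense in which splines = kernels for independent data.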

[Figure: the weight functions G_n(t = 0.25, x) in a specific case for independent data, for the kernel fit and the smoothing spline. Note the similarity of shape and the locality: only X's near t = 0.25 get any weight.]

Working Independence. Working independence: ignore all correlations, then fix up the standard errors at the end. Advantage: the surrogacy assumption is not required. Disadvantage: possible severe loss of efficiency if carried too far.

Working Independence. Working independence: ignore all correlations, but posit some reasonable marginal variances; weighting is important for efficiency. Weighted versions: splines and kernels have obvious analogues. Standard method: Zeger & Diggle; Hoover, Rice, Wu & Yang; Lin & Ying; etc.

Working Independence. Working independence: weighted splines and weighted kernels are linear in the responses. The Silverman result still holds. In this sense, splines = kernels.

Accounting for Correlation. Splines have an obvious analogue for non-independent data. Let Σ be a working covariance matrix. Penalized generalized least squares (GLS): minimize {Y − θ(X)}ᵀ Σ⁻¹ {Y − θ(X)} plus the roughness penalty, i.e. GLS ridge regression. Because splines are based on likelihood ideas, they generalize quickly to new problems.
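A sketch of the penalized GLS spline under an assumed exchangeable working covariance; the basis and penalty reuse the P-spline construction above, and all numeric choices are illustrative:

```python
import numpy as np
from scipy.linalg import block_diag

def gls_pspline(x, y, knots, lam, Sigma_inv):
    """Penalized GLS (GLS ridge regression): minimize
    (y - B beta)' Sigma_inv (y - B beta) + lam * beta' D beta."""
    B = np.column_stack([np.ones_like(x), x] +
                        [np.maximum(x - k, 0.0) for k in knots])
    D = np.diag([0.0, 0.0] + [1.0] * len(knots))
    beta = np.linalg.solve(B.T @ Sigma_inv @ B + lam * D,
                           B.T @ Sigma_inv @ y)
    return B @ beta

# Working covariance: n clusters of size m, exchangeable correlation rho
n, m, rho = 35, 3, 0.4
R = (1.0 - rho) * np.eye(m) + rho * np.ones((m, m))
Sigma_inv = block_diag(*([np.linalg.inv(R)] * n))

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, (n, m))
eps = rng.multivariate_normal(np.zeros(m), 0.09 * R, size=n)
y = np.sin(2.0 * np.pi * x) + eps
fit = gls_pspline(x.ravel(), y.ravel(), np.linspace(0.1, 0.9, 9),
                  lam=0.1, Sigma_inv=Sigma_inv)
```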

Accounting for Correlation. Splines have an obvious analogue for non-independent data; kernels are not so obvious. One can do theory with kernels. Local likelihood kernel ideas are standard in independent data problems, and most attempts at kernels for correlated data have tried to use local likelihood kernel methods.

Kernels and Correlation. Problem: how to define locality for kernels? Goal: estimate the function at t. Let K_h(t) be a diagonal matrix of standard kernel weights K_h(X_ij − t). Standard kernel method: GLS pretending the inverse covariance matrix is K_h^{1/2}(t) Σ⁻¹ K_h^{1/2}(t). The estimate is inherently local.
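Spelled out for the local-constant case (my reconstruction of the dropped display, consistent with this standard-kernel construction):

```latex
\widehat{\theta}(t)
  = \left\{ \mathbf{1}^{\top} K_h^{1/2}(t)\, \Sigma^{-1}\, K_h^{1/2}(t)\, \mathbf{1} \right\}^{-1}
    \mathbf{1}^{\top} K_h^{1/2}(t)\, \Sigma^{-1}\, K_h^{1/2}(t)\, Y .
```

Locality is then automatic: the entries of K_h^{1/2}(t) for observations with X far from t vanish as h → 0, whatever Σ is.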

Kernels and Correlation. Specific case: m = 3, n = 35, exchangeable correlation structure. [Figure: the weight functions G_n(t = 0.25, x); red: ρ = 0.0, green: ρ = 0.4, blue: ρ = 0.8.] Note the locality of the kernel method.

Splines and Correlation. Specific case: m = 3, n = 35, exchangeable correlation structure. [Figure: the weight functions G_n(t = 0.25, x); red: ρ = 0.0, green: ρ = 0.4, blue: ρ = 0.8.] Note the lack of locality of the spline method.

Splines and Correlation. Specific case: m = 3, n = 35, complex correlation structure. [Figure: the weight functions G_n(t = 0.25, x); red: nearly singular, green: ρ = 0.0, blue: AR(0.8).] Note the lack of locality of the spline method.

Splines and Standard Kernels. Accounting for correlation: standard kernels remain local; splines are not local. The numerical results can be confirmed theoretically.

Results on Kernels and Correlation. For GLS with kernel weights, the optimal working covariance matrix is working independence! Using the correct covariance matrix increases the variance and increases the MSE. This holds for kernels (or at least these standard kernels); it does not hold for splines.

Pseudo-Observation Kernel Methods. Better kernel methods are possible. Pseudo-observation: the original method. Construction: a specific linear transformation of Y with mean θ(X) and diagonal covariance matrix. This adjusts the original responses without affecting the mean.

Pseudo-Observation Kernel Methods. Construction: a specific linear transformation of Y with mean θ(X) and diagonal covariance. Iterative: the transformation depends on θ, so the pseudo-observations are updated and refit until convergence. Efficiency: more efficient than working independence. Proof of principle: kernel methods can be constructed to take advantage of correlation.

Efficiency of Splines and Pseudo-Observation Kernels. [Figure/table of efficiencies under three working covariances. Exchng: exchangeable with correlation 0.6; AR: autoregressive with correlation 0.6; Near Sing: a nearly singular matrix.]

What Do GLS Splines Do? GLS splines are really working-independence splines using pseudo-observations. Let Y* = θ(X) + diag(Σ⁻¹)⁻¹ Σ⁻¹ {Y − θ(X)}. Then GLS splines are working-independence splines applied to Y*, with weights diag(Σ⁻¹).
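A one-line check of the claim, writing W = diag(Σ⁻¹) for the working-independence weights and θ = Bβ for the spline fit (the algebra is mine, matching the Y* defined above):

```latex
B^{\top} W \{ Y^{*} - B\beta \}
  = B^{\top} W \left\{ \theta + W^{-1}\Sigma^{-1}(Y - \theta) - B\beta \right\}
  = B^{\top} \Sigma^{-1} (Y - B\beta),
```

so the weighted working-independence penalized normal equations in Y* coincide exactly with the GLS spline equations in Y; iterating the Y* update to convergence (next slide) makes this a fixed point.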

GLS Splines and SUR Kernels. GLS splines are working-independence splines on pseudo-observations. Algorithm: iterate until convergence. Idea: for kernels, do the same thing. This is Naisyin Wang's SUR method.
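A sketch of the iteration for kernels (this is the pseudo-observation update loop as I read it from these slides, not Wang's exact componentwise SUR construction, which the next slide describes):

```python
import numpy as np

def wi_kernel(xf, yf, wf, t, h):
    """Weighted working-independence local average at t."""
    k = wf * np.exp(-0.5 * ((xf - t) / h) ** 2)
    return np.sum(k * yf) / np.sum(k)

def pseudo_obs_iteration(x, y, Sigma, h, n_iter=25):
    """Iterate: (1) form pseudo-observations
    Y* = theta + diag(Sigma^-1)^-1 Sigma^-1 (Y - theta) from the
    current fit theta; (2) refit theta by a weighted
    working-independence kernel smoother with weights diag(Sigma^-1).
    x, y are (n, m): n clusters, m observations per cluster."""
    Sinv = np.linalg.inv(Sigma)
    d = np.diag(Sinv)
    A = Sinv / d[:, None]                  # diag(Sinv)^{-1} @ Sinv
    xf = x.ravel()
    wf = np.tile(d, x.shape[0])
    fit = lambda yv: np.array([wi_kernel(xf, yv, wf, t, h)
                               for t in xf]).reshape(x.shape)
    theta = fit(y.ravel())                 # start from the W.I. fit
    for _ in range(n_iter):
        ystar = theta + (y - theta) @ A.T  # cluster-wise transformation
        theta = fit(ystar.ravel())
    return theta

# Example: exchangeable covariance, n = 35 clusters of size m = 3
n, m, rho = 35, 3, 0.4
Sigma = 0.09 * ((1.0 - rho) * np.eye(m) + rho * np.ones((m, m)))
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, (n, m))
y = np.sin(2.0 * np.pi * x) + rng.multivariate_normal(np.zeros(m), Sigma, n)
theta = pseudo_obs_iteration(x, y, Sigma, h=0.1)
```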

Better Kernel Methods: SUR. Another view: consider the current state in the iteration. For every j, assume the function is fixed and known at the other components k ≠ j. Use the seemingly unrelated regression (SUR) idea: for each j, form the estimating equation for local averages/local linear fits for the j-th component only, using GLS with weights Σ⁻¹. Sum the estimating equations together, and solve.

SUR Kernel Methods. It is well known that the GLS spline has an exact, analytic expression. We have shown that the SUR kernel method also has an exact, analytic expression. Both methods are linear in the responses, and relatively nontrivial calculations show that Silverman's result still holds: splines = SUR kernels.

Nonlocality. The lack of locality of GLS splines and SUR kernels is surprising. Suppose we want to estimate the function at t. Result: all observations in a cluster contribute to the fit, not just those with covariates near t. Locality is defined at the cluster level.

Nonlocality. Wang's SUR kernels = pseudo-observation kernels with a clever linear transformation. Let Y* = θ(X) + diag(Σ⁻¹)⁻¹ Σ⁻¹ {Y − θ(X)}. Then SUR kernels are working-independence kernels applied to Y*.

Locality of Splines. Splines = SUR kernels (a Silverman-type result). GLS spline: iterative standard independent spline smoothing, with SUR pseudo-observations formed at each iteration. GLS splines are not local in the original observations, but they are local in (the same!) pseudo-observations.

Time Series Problems. Time series problems: many of the same issues arise. Original pseudo-observation method, in two stages: (1) a linear transformation of the series with mean θ(X) and independent errors; (2) a single standard kernel smoother applied to the transformed data. Potential for great gains in efficiency (even infinite for AR problems with large correlation).

Time Series: AR(1) Illustration, First Pseudo-Observation Method. AR(1) errors with correlation ρ: form Y_t^0 = Y_t − ρ{Y_{t−1} − θ(X_{t−1})}, then regress Y_t^0 on X_t.

Time Series Problems. More efficient methods can be constructed from a series of regression problems: for each lag j, form pseudo-observations with mean θ(X_{t−j}) and white-noise errors. Regress separately for each j: the fits are asymptotically independent. Then take a weighted average. A time-series version of SUR kernels for longitudinal data?

Time Series: AR(1) Illustration, New Pseudo-Observation Method. AR(1) errors with correlation ρ: regress Y_t^0 = Y_t − ρ{Y_{t−1} − θ(X_{t−1})} on X_t, and Y_t^1 = Y_{t−1} − ρ⁻¹{Y_t − θ(X_t)} on X_{t−1}. Combine with weights 1 and ρ².
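A short derivation under the AR(1) model Y_t = θ(X_t) + ε_t, ε_t = ρ ε_{t−1} + u_t with white-noise innovations u_t; the identities and efficiency ratios here are my reconstruction of the slides' dropped formulas:

```latex
Y_t^{0} = Y_t - \rho\{Y_{t-1} - \theta(X_{t-1})\} = \theta(X_t) + u_t,
\qquad
Y_t^{1} = Y_{t-1} - \rho^{-1}\{Y_t - \theta(X_t)\} = \theta(X_{t-1}) - u_t/\rho .
```

Both have white-noise errors with variances σ_u² and σ_u²/ρ², so the inverse-variance weights are proportional to 1 and ρ². Since σ_u² = (1 − ρ²) Var(ε_t), the first pseudo-observation method gains a factor 1/(1 − ρ²) over working independence (unbounded as |ρ| → 1, matching the "even infinite" remark above), and combining the two asymptotically independent fits gains a further factor 1 + ρ².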

Time Series Problems. AR(1) errors with correlation ρ. Efficiency of the original pseudo-observation method relative to working independence; efficiency of the new (SUR?) pseudo-observation method relative to the original method: both gains grow with |ρ|, as in the derivation sketched above.

The Semiparametric Model: Y_ij = θ(X_ij) + Z_ijᵀβ + ε_ij, where Y = response and X, Z = time-varying covariates. Question: can we improve efficiency for β by accounting for correlation?

Profile Methods Given , solve for  say Basic idea: Regress Working independence Standard kernels Pseudo –observations kernels SUR kernels

Profile Methods Given , solve for  say Then fit GLS or W.I. to the model with mean Question: does it matter what kernel method is used? Question: How bad is using W.I. everywhere? Question: are there efficient choices?

The Semiparametric Model: Special Case. If X does not vary with time (X_ij = X_i), a simple semiparametric efficient method is available. The basic point is that the cluster vector Y_i − Z_i β has common mean θ(X_i) and a common covariance matrix. If θ(·) were a polynomial, GLS likelihood methods would be natural.

The Semiparametric Model: Special Case. Method: replace the polynomial GLS likelihood with a GLS local likelihood with kernel weights in X. Then do GLS on the derived variable Y − θ̂(X). The result is semiparametric efficient.

Profile Method: General Case. Given β, solve for θ̂(·, β); then fit GLS or W.I. to the model with mean θ̂(X, β) + Zᵀβ. In this general case, how you estimate θ matters: working independence, standard kernel, pseudo-observation kernel, or SUR kernel.

Profile Methods. In this general case, how you estimate θ matters: working independence, standard kernel, pseudo-observation kernel, or SUR kernel. The SUR method leads to the semiparametric efficient estimate; so too does the GLS spline.

Longitudinal CD4 Count Data (Zeger and Diggle). [Table: estimates and standard errors for age, # of smokes, drug use?, # of partners, and depression?, under working independence (Est., s.e.), semiparametric GLS, Z-D, and semiparametric GLS refit.]

Conclusions (1/3): Nonparametric Regression. In nonparametric regression: kernels = splines for working independence (W.I.); working independence is inefficient; standard kernels ≠ splines for correlated data.

Conclusions (2/3): Nonparametric Regression. In nonparametric regression: pseudo-observation methods improve upon working independence; SUR kernels = splines for correlated data; splines and SUR kernels are not local, but they are local in pseudo-observations.

Conclusions (3/3): Semiparametric Regression. In semiparametric regression: profile methods are a general class. Fully efficient parameter estimates are easily constructed if X is not time-varying. When X is time-varying, the method of estimating θ affects the properties of the parameter estimates; using SUR kernels or GLS splines as the nonparametric method leads to efficient results. Conclusions can change between working independence and semiparametric GLS.

Conclusions: Splines versus Kernels. One has to be struck by the fact that all the grief in this problem has come from trying to define kernel methods. At the end of the day, they are no more efficient than splines, and they are harder and more subtle to define. Showing the equivalence, as we have done, suggests the good properties of splines.

The Numbers in the Table. The decrease in s.e.'s is in accordance with our theory. The other phenomena are more difficult to explain; nonetheless, they are not unique to semiparametric GEE methods. Similar discrepant outcomes occurred in parametric GEE estimation in which θ(t) was replaced by a cubic regression function in time. Furthermore, we simulated data using the observed covariates, with responses generated from the multivariate normal with mean equal to the fitted mean from the parametric correlated GEE estimation and with the correlation given by Zeger and Diggle. The level of divergence between the two sets of results in the simulated data was fairly consistent with what appeared in the table. For example, among the first 25 generated data sets, 3 had different signs for the # of partners coefficient, and in 7 the drug-use coefficient obtained by W.I. was 1.8 times or more the size of the one obtained by the proposed method.