
1 Unsupervised Learning With Non-ignorable Missing Data. Machine Learning Group Talk, University of Toronto, Monday Oct 4, 2004. Ben Marlin, Sam Roweis, Rich Zemel.

2 Outline: Introduction; Missing Data Theory and EM; Models for Non-Ignorable Missing Data; Synthetic Data Experiments; Real Data Experiments; Extensions and Future Work; Conclusions.

3 Introduction: The Problem of Missing Data. Missing data is a pervasive problem in machine learning and statistical data analysis. Most large, complex data sets contain a certain amount of missing data, and there are extreme examples in machine learning with upwards of 95% missing data (EachMovie). The fundamental questions in the analysis of missing data are why the data is missing and what we have to do about it.

4 Introduction: A Theory of Missing Data. Little and Rubin laid out a theory of missing data several decades ago that provides answers to these questions. They describe a classification of missing data in terms of the mechanism, or process, that causes the data to be missing, i.e., the generative model for the missing data. They also derive the exact conditions for when missing data must be treated specially to obtain correct likelihood-based inferences.

5 Introduction: Types of Missing Data: MCAR. If the missing data can be explained by a simple random process, like flipping a single biased coin, the missing data is missing completely at random. [Figure: a small data matrix (data cases by attributes) animated with entries deleted uniformly at random.]

6 Introduction: Types of Missing Data: MAR. If the probability that a data entry is missing depends only on the data entries that are observed, then the data is missing at random. [Figure: the same data matrix with entries deleted based on the observed values in each case.]

7 Introduction: Types of Missing Data: Non-Ignorable. If the probability that a data entry is missing depends on the value of that data entry itself, then the missing data is non-ignorable. [Figure: the same data matrix with entries deleted based on their own values.]
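The three mechanisms can be stated formally using the response-indicator notation that slide 12 introduces; this is a restatement of Little and Rubin's taxonomy, not a formula copied from the slides:

```latex
% Missing-data mechanisms: R is the matrix of response indicators,
% Y_obs / Y_mis the observed / missing entries, mu the parameters
% of the selection model.
\begin{align*}
\text{MCAR:} \quad P(R \mid Y_{\mathrm{obs}}, Y_{\mathrm{mis}}, \mu) &= P(R \mid \mu)\\
\text{MAR:}  \quad P(R \mid Y_{\mathrm{obs}}, Y_{\mathrm{mis}}, \mu) &= P(R \mid Y_{\mathrm{obs}}, \mu)\\
\text{Non-ignorable:} \quad P(R \mid Y_{\mathrm{obs}}, Y_{\mathrm{mis}}, \mu) &\ \text{depends on}\ Y_{\mathrm{mis}}
\end{align*}
```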

8 Introduction: The Effect of Missing Data. If missing data is MCAR or MAR, then inference based on the observed data likelihood will not be biased. If missing data is non-ignorable, then inference based on the observed data likelihood is provably biased. [Figure: a small numeric mean-estimation example comparing the sample mean under MCAR deletion (4.90) with the mean under non-ignorable deletion (4.10).]

9 Introduction: Unsupervised Learning and Missing Data. This simple mean estimation problem can be interpreted as fitting a normal distribution to the data: a simple unsupervised learning problem. Just like the mean estimation example, any unsupervised learning algorithm that treats non-ignorable missing data as missing at random will learn biased estimates of the model parameters.
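The claim is easy to check numerically. Below is an illustrative sketch (not from the talk): the same sample is thinned either completely at random or with a value-dependent observation probability, and only the non-ignorable version biases the observed mean.

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(loc=5.0, scale=1.0, size=100_000)  # complete data

# MCAR: every entry is observed with the same probability.
mcar_obs = y[rng.random(y.size) < 0.5]

# Non-ignorable: the observation probability depends on the value
# itself, here rising with y, so large values are over-represented.
p_obs = 1.0 / (1.0 + np.exp(-(y - 5.0)))
ni_obs = y[rng.random(y.size) < p_obs]

print(f"true mean:          {y.mean():.3f}")         # ~5.000
print(f"MCAR observed mean: {mcar_obs.mean():.3f}")  # ~5.000 (unbiased)
print(f"NI observed mean:   {ni_obs.mean():.3f}")    # biased upward
```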

10 Introduction: Research Overview. The goals of this research project are:
1. Apply the theory developed by Little and Rubin to extend the standard unsupervised learning framework to correctly handle non-ignorable missing data.
2. Apply this extended framework to augment a variety of existing models, and show that tractable learning algorithms can be obtained.
3. Demonstrate that these augmented models outperform standard models on tasks where the missing data is believed to be non-ignorable.

11 Introduction: Research Overview. The current status of the project:
1. We have been able to augment mixture models to account for non-ignorable missing data.
2. We have derived efficient learning and exact inference algorithms for the augmented models.
3. We have obtained empirical results on synthetic data sets showing that the augmented models learn accurately.
4. Preliminary results were recently submitted to AISTATS.

12 Missing Data Theory and EM: Notation. Y: the complete data matrix. Y_obs: the observed elements of the data matrix. Y_mis: the missing elements of the data matrix. R: the matrix of response indicators (R_mn = 1 if Y_mn is observed). P(Y | θ): the data model. P(R | Y, μ): the selection or observation model.

13 Missing Data Theory and EM: The MAR Assumption. Under this notation the MAR assumption can be expressed as follows: P(R | Y_obs, Y_mis, μ) = P(R | Y_obs, μ). Basically, this says that the distribution over the response indicators is independent of the missing data.

14 Missing Data Theory and EM: Observed and Full Likelihood Functions. The standard procedure for unsupervised learning is to maximize the observed data likelihood. The correct procedure is to maximize the full data likelihood.
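Written in the notation above, the two objectives are as follows (a sketch of the standard forms from Little and Rubin, not copied from the slide):

```latex
% Observed-data likelihood: marginalizes out the missing entries
% and ignores the response indicators R.
L_{\mathrm{obs}}(\theta) = P(Y_{\mathrm{obs}} \mid \theta)
  = \int P(Y_{\mathrm{obs}}, Y_{\mathrm{mis}} \mid \theta)\, dY_{\mathrm{mis}}

% Full-data likelihood: jointly models the data and the
% missing-data mechanism via the selection model P(R | Y, mu).
L_{\mathrm{full}}(\theta, \mu) = P(Y_{\mathrm{obs}}, R \mid \theta, \mu)
  = \int P(R \mid Y_{\mathrm{obs}}, Y_{\mathrm{mis}}, \mu)\,
         P(Y_{\mathrm{obs}}, Y_{\mathrm{mis}} \mid \theta)\, dY_{\mathrm{mis}}
```

Under MAR the selection term factors out of the integral, so maximizing either likelihood over θ gives the same answer; under non-ignorable missingness it does not, which is exactly why the observed data likelihood is biased.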

15 Missing Data Theory and EM: Expectation Maximization Algorithm. In an unsupervised learning setting with non-ignorable missing data, the correct learning procedure is to maximize the expected full log likelihood.
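Concretely, the EM objective takes the expectation of the full log likelihood under the posterior over the missing entries given the observed data and response indicators (a standard reconstruction in the notation of slide 12):

```latex
% EM Q function: expected full log likelihood under the current
% parameter estimates theta^t, mu^t.
Q(\theta, \mu \mid \theta^{t}, \mu^{t})
  = \mathbb{E}_{P(Y_{\mathrm{mis}} \mid Y_{\mathrm{obs}}, R,\, \theta^{t}, \mu^{t})}
    \big[ \log P(Y_{\mathrm{obs}}, Y_{\mathrm{mis}}, R \mid \theta, \mu) \big]
```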

16 Models for Non-Ignorable Missing Data: Review: Standard Mixture Model. In the work that follows we assume a multinomial mixture model as the data model. It is a simple baseline model that is quite effective in many discrete domains. [Figure: plate diagram for n = 1:N; a latent variable Z_n generates the data variables Y_1n, ..., Y_Mn.]
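Consistent with the plate diagram, the data model can be written as below; the parameter names θ (mixing proportions) and β (per-attribute multinomials) are assumed from context, since the slide's own symbols are garbled.

```latex
% Multinomial mixture: theta_z = P(Z_n = z) are mixing proportions,
% beta_{vmz} = P(Y_mn = v | Z_n = z) are per-attribute multinomials.
P(Y_n \mid \theta, \beta)
  = \sum_{z=1}^{K} \theta_z \prod_{m=1}^{M} \prod_{v=1}^{V}
    \beta_{vmz}^{\,[Y_{mn} = v]}
```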

17 Models for Non-Ignorable Missing Data: Mixture/Fully Connected Model. If we fully connect the response indicators to the data variables we get the most general selection model, but it is not tractable. [Figure: plate diagram for n = 1:N, m = 1:M; latent variable Z_n, data variables Y_mn, and response indicators R_mn, with the indicators fully connected to the data variables.]

18 Models for Non-Ignorable Missing Data: Mixture/CPT-v Model. To derive tractable learning and inference algorithms we need to assert further independence relations. [Figure: plate diagram; each response indicator R_mn depends only on its own data variable Y_mn.]

19 Models for Non-Ignorable Missing Data: Mixture/CPT-v Model. Exact inference and learning for the Mixture/CPT-v model is only slightly more complex than in a standard mixture model.
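To see why, here is a minimal sketch of the per-case E-step, assuming the CPT-v selection model P(R_mn = 1 | Y_mn = v) = μ_v; the function and variable names are illustrative, not from the talk. For a missing entry the unknown value is summed out attribute by attribute, so the posterior over Z_n costs O(KMV), barely more than in the standard mixture.

```python
import numpy as np

def posterior_z(y, r, theta, beta, mu):
    """Posterior over the latent cluster for one data case.

    y:     (M,) ints in 0..V-1 (entries with r == 0 are ignored)
    r:     (M,) response indicators (1 = observed, 0 = missing)
    theta: (K,) mixing proportions
    beta:  (K, M, V), beta[z, m, v] = P(Y_mn = v | Z_n = z)
    mu:    (V,), mu[v] = P(R_mn = 1 | Y_mn = v)   (CPT-v)
    """
    log_post = np.log(theta).copy()
    M = y.shape[0]
    for m in range(M):
        if r[m] == 1:
            # Observed entry: P(Y = v | z) * P(R = 1 | Y = v).
            log_post += np.log(beta[:, m, y[m]] * mu[y[m]])
        else:
            # Missing entry: sum the value out per attribute, which
            # is what keeps exact inference tractable.
            log_post += np.log(beta[:, m, :] @ (1.0 - mu))
    post = np.exp(log_post - log_post.max())
    return post / post.sum()
```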

20 Models for Non-Ignorable Missing Data: Mixture/LOGIT-v,mz Model. The LOGIT-v,mz model assumes a functional form for the missing data parameters, and is able to model a wider range of effects. [Figure: plate diagram; each response indicator R_mn depends on the value of Y_mn, the item index m, and the latent variable Z_n.]
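The slide does not preserve the exact functional form; one plausible reading, consistent with the name LOGIT-v,mz and the value-based and item/latent-variable effects shown later, is a logistic function of the sum of the two effects. Treat the parameterization below as an assumption:

```latex
% Assumed form: sigma is the logistic function, omega_v a
% value-based effect, gamma_{mz} an item/latent-variable effect.
P(R_{mn} = 1 \mid Y_{mn} = v, Z_n = z)
  = \sigma\!\left(\omega_v + \gamma_{mz}\right),
\qquad \sigma(x) = \frac{1}{1 + e^{-x}}
```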

21 Models for Non-Ignorable Missing Data: Mixture/LOGIT-v,mz Model. Exact inference is still possible, but learning requires gradient-based techniques for the selection model parameters.

22 Synthetic Data Experiments: Experimental Procedure.
1. Sample mixture model parameters from Dirichlet priors.
2. Sample 5000 complete data cases from the mixture model.
3. Apply each missing data effect and resample the complete data to obtain observed data.
4. Train each model on the observed data only.
5. Measure prediction error on the complete data set.
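A minimal sketch of steps 1-3 for a CPT-v style effect; the shapes and names here are illustrative assumptions, not the talk's actual experimental code.

```python
import numpy as np

rng = np.random.default_rng(1)
K, M, V, N = 4, 10, 5, 5000  # clusters, attributes, values, cases

# 1. Sample mixture model parameters from Dirichlet priors.
theta = rng.dirichlet(np.ones(K))              # mixing proportions
beta = rng.dirichlet(np.ones(V), size=(K, M))  # beta[z, m] over values

# 2. Sample 5000 complete data cases from the mixture model.
z = rng.choice(K, size=N, p=theta)
y = np.array([[rng.choice(V, p=beta[z[n], m]) for m in range(M)]
              for n in range(N)])

# 3. Apply a CPT-v missing data effect: each entry is observed
#    with probability mu[v], depending on its own value v.
mu = np.linspace(0.9, 0.1, V)
r = rng.random(y.shape) < mu[y]  # response indicators
```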

23 Synthetic Data Experiments: Experiment 1: CPT-v Missing Data

24 Synthetic Data Experiments: Experiment 1: Results

25 Synthetic Data Experiments: Experiment 2: LOGIT-v,mz Missing Data. [Figure panels: value-based effect; item/latent variable effect.]

26 Synthetic Data Experiments: Experiment 2: Results

27 Real Data Experiments: Experimental Procedure.
1. Train the LOGIT-v,mz model on observed data.
2. Examine the parameters and full likelihood values after training.

28 Real Data Experiments: Data Sets.
EachMovie Collaborative Filtering Data Set: Base: 2.8M ratings, 73K users, 1.6K movies, 97.6% missing. Filtering: minimum 20 ratings per user. Train: 2.1M ratings, 30K users, 95.6% missing.
Jester Collaborative Filtering Data Set: Base: 900K ratings, 17K users, 100 jokes, 50.4% missing. Filtering: continuous -10 to +10 scale converted to a discrete 5-point scale.

29 Real Data Experiments: Results – Marginal Selection Probabilities

30 Real Data Experiments: Results – Full Data Log Likelihood

Model        Jester          EachMovie
LOGIT-v,mz   -1.83036x10^6   -8.75037x10^6
MCAR MM      -2.48498x10^6   -1.16489x10^7

31 Conclusions: Summary and Future Work. We have proposed a framework for dealing with non-ignorable missing data by augmenting existing models with a general selection model. We have shown positive preliminary results on synthetic data with both the CPT-v and LOGIT-v,mz models. We have shown that the LOGIT-v,mz model does something reasonable on real data. To show convincing results on real data we need to look at new procedures for collecting data, and possibly new experimental procedures for validating models under this framework.

32 The End

