Predictive Automatic Relevance Determination by Expectation Propagation
Yuan (Alan) Qi, Thomas P. Minka, Rosalind W. Picard, Zoubin Ghahramani

Motivation
Task 1: Classify high-dimensional datasets with many irrelevant features, e.g., normal vs. cancer microarray data.
Task 2: Train sparse Bayesian kernel classifiers for fast test performance.

Outline
Background
– Bayesian classification model
– Automatic relevance determination (ARD)
Risk of overfitting by optimizing hyperparameters
Predictive ARD by expectation propagation (EP):
– Approximate prediction error
– EP approximation
Experiments
Conclusions


Bayesian Classification Model
Labels: t, inputs: X, parameters: w.
Prior of the classifier w: $p(\mathbf{w} \mid \boldsymbol{\alpha}) = \mathcal{N}(\mathbf{w};\, \mathbf{0},\, \operatorname{diag}(\boldsymbol{\alpha})^{-1})$
Likelihood for the data set: $p(\mathbf{t} \mid \mathbf{X}, \mathbf{w}) = \prod_{i=1}^{N} \Phi(t_i\, \mathbf{w}^\top \mathbf{x}_i)$, where $\Phi(\cdot)$ is the cumulative distribution function of a standard Gaussian and $t_i \in \{-1, +1\}$.
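
To make the probit likelihood concrete, here is a minimal sketch (my own illustration, not from the slides; function and variable names are mine) that evaluates $\log p(\mathbf{t} \mid \mathbf{X}, \mathbf{w})$ with NumPy/SciPy:

```python
import numpy as np
from scipy.stats import norm

def probit_log_likelihood(w, X, t):
    """Log-likelihood of a linear probit classifier.

    X: (N, d) inputs, t: (N,) labels in {-1, +1}, w: (d,) weights.
    p(t | X, w) = prod_i Phi(t_i * w^T x_i), Phi = standard normal CDF.
    """
    z = t * (X @ w)                 # signed activations t_i * w^T x_i
    return norm.logcdf(z).sum()     # log of the product of probit terms

# Toy check: a weight vector aligned with the labels gives high likelihood.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
w_true = np.array([2.0, -1.0, 0.0])   # the third feature is irrelevant
t = np.sign(X @ w_true + 0.1 * rng.normal(size=20))
print(probit_log_likelihood(w_true, X, t))
```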

Evidence and Predictive Distribution
The evidence, i.e., the marginal likelihood of the hyperparameters $\boldsymbol{\alpha}$:
$p(\mathbf{t} \mid \mathbf{X}, \boldsymbol{\alpha}) = \int p(\mathbf{t} \mid \mathbf{X}, \mathbf{w})\, p(\mathbf{w} \mid \boldsymbol{\alpha})\, d\mathbf{w}$
The predictive posterior distribution of the label $t^{\ast}$ for a new input $\mathbf{x}^{\ast}$:
$p(t^{\ast} \mid \mathbf{x}^{\ast}, \mathbf{t}, \mathbf{X}, \boldsymbol{\alpha}) = \int p(t^{\ast} \mid \mathbf{x}^{\ast}, \mathbf{w})\, p(\mathbf{w} \mid \mathbf{t}, \mathbf{X}, \boldsymbol{\alpha})\, d\mathbf{w}$
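
Neither integral is tractable for the probit likelihood, which is what motivates the EP approximation later in the deck. On small problems, though, the evidence can be sanity-checked by plain Monte Carlo; the sketch below (my own illustration, with hypothetical names) averages the likelihood over prior samples. The predictive distribution is the analogous average over posterior samples.

```python
import numpy as np
from scipy.stats import norm

def mc_evidence(X, t, alpha, n_samples=20000, seed=0):
    """Monte Carlo estimate of the evidence p(t | X, alpha).

    Draws w ~ N(0, diag(alpha)^{-1}) from the ARD prior and averages the
    probit likelihood prod_i Phi(t_i w^T x_i) over the samples.
    """
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(n_samples, X.shape[1])) / np.sqrt(alpha)
    loglik = norm.logcdf(t[:, None] * (X @ W.T)).sum(axis=0)
    m = loglik.max()                      # log-mean-exp for stability
    return m + np.log(np.exp(loglik - m).mean())

# Toy usage: compare the evidence under two hyperparameter settings.
rng = np.random.default_rng(1)
X = rng.normal(size=(15, 3))
t = np.sign(X[:, 0])
print(mc_evidence(X, t, alpha=np.ones(3)),
      mc_evidence(X, t, alpha=np.array([1.0, 1e4, 1e4])))
```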

Automatic Relevance Determination (ARD)
Give each classifier weight an independent Gaussian prior whose variance, $1/\alpha_i$, controls how far from zero that weight is allowed to go:
$p(\mathbf{w} \mid \boldsymbol{\alpha}) = \prod_i \mathcal{N}(w_i;\, 0,\, \alpha_i^{-1})$
Maximize $p(\mathbf{t} \mid \mathbf{X}, \boldsymbol{\alpha})$, the marginal likelihood of the model, with respect to $\boldsymbol{\alpha}$.
Outcome: many elements of $\boldsymbol{\alpha}$ go to infinity, which naturally prunes irrelevant features in the data.
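
The paper applies ARD to the probit classifier, where the posterior must itself be approximated. To illustrate just the ARD mechanism, the sketch below uses the conjugate linear-regression case, where the posterior is closed-form, and runs MacKay-style fixed-point updates of $\boldsymbol{\alpha}$; this is my own illustrative choice, not the method of the paper.

```python
import numpy as np

def ard_regression(X, y, noise_var=0.1, n_iters=100, prune_at=1e6):
    """Evidence maximization (type-II ML) for linear regression with an
    ARD prior w_i ~ N(0, 1/alpha_i), via MacKay's fixed-point updates.

    Returns the posterior mean of w and the learned precisions alpha.
    """
    alpha = np.ones(X.shape[1])
    for _ in range(n_iters):
        # Posterior over w given the current alpha (closed form here).
        A = np.diag(alpha) + X.T @ X / noise_var
        V = np.linalg.inv(A)
        m = V @ X.T @ y / noise_var
        # gamma_i = 1 - alpha_i * V_ii: how well-determined w_i is.
        gamma = 1.0 - alpha * np.diag(V)
        alpha = gamma / (m ** 2 + 1e-12)
        alpha = np.minimum(alpha, prune_at)   # alpha -> infinity prunes w_i
    m[alpha >= prune_at] = 0.0                # pruned (irrelevant) features
    return m, alpha

# Irrelevant features get huge alpha and are pruned to zero.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
y = X @ np.array([1.5, 0.0, -2.0, 0.0, 0.0]) + 0.3 * rng.normal(size=100)
m, alpha = ard_regression(X, y)
print(np.round(m, 2), np.round(alpha, 1))
```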

Outline
Background
– Bayesian classification model
– Automatic relevance determination (ARD)
Risk of overfitting by optimizing hyperparameters
Predictive ARD by expectation propagation (EP):
– Approximate prediction error
– EP approximation
– Sequential update
Experiments
Conclusions

Two Types of Overfitting
Classical maximum likelihood:
– Optimizing the classifier weights w can directly fit noise in the data, resulting in a complicated model.
Type II maximum likelihood (ARD):
– Optimizing the hyperparameters corresponds to choosing which variables are irrelevant. Choosing one out of exponentially many models can also overfit if we maximize the model marginal likelihood.

Risk of Optimizing $\boldsymbol{\alpha}$
[Figure: toy 2-D dataset, X = class 1 vs. O = class 2, with decision boundaries labeled Evd-ARD-1, Evd-ARD-2, and Bayes Point.]

Outline
Background
– Bayesian classification model
– Automatic relevance determination (ARD)
Risk of overfitting by optimizing hyperparameters
Predictive ARD by expectation propagation (EP):
– Approximate prediction error
– EP approximation
Experiments
Conclusions

Predictive-ARD
Choose the model with the best estimated predictive performance instead of the most probable model.
Expectation propagation (EP) estimates the leave-one-out predictive performance without performing any expensive cross-validation.

Estimate Predictive Performance
Predictive posterior given a test data point $\mathbf{x}^{\ast}$:
$p(t^{\ast} \mid \mathbf{x}^{\ast}, \mathbf{t}) \approx \int p(t^{\ast} \mid \mathbf{x}^{\ast}, \mathbf{w})\, q(\mathbf{w} \mid \mathbf{t})\, d\mathbf{w}$
EP can estimate the predictive leave-one-out error probability
$\epsilon_{\mathrm{LOO}} = \frac{1}{N} \sum_{i=1}^{N} \left( 1 - \int p(t_i \mid \mathbf{x}_i, \mathbf{w})\, q(\mathbf{w} \mid \mathbf{t}_{\setminus i})\, d\mathbf{w} \right)$,
where $q(\mathbf{w} \mid \mathbf{t}_{\setminus i})$ is the approximate posterior with the $i$th label left out. EP can also estimate the predictive leave-one-out error count
$c_{\mathrm{LOO}} = \sum_{i=1}^{N} \mathbb{I}\!\left[ \int p(t_i \mid \mathbf{x}_i, \mathbf{w})\, q(\mathbf{w} \mid \mathbf{t}_{\setminus i})\, d\mathbf{w} < \tfrac{1}{2} \right]$.
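
For a Gaussian cavity over the activation $f_i = \mathbf{w}^\top \mathbf{x}_i$, the probit integral has the closed form $\int \Phi(t f)\, \mathcal{N}(f; m, v)\, df = \Phi\!\big(t m / \sqrt{1 + v}\big)$, so both estimates are cheap once EP has run. A small sketch (names are mine; the cavity moments would come from an EP run such as the one sketched after the EP slide below):

```python
import numpy as np
from scipy.stats import norm

def loo_error_estimates(t, cav_mean, cav_var):
    """Predictive leave-one-out error estimates from EP cavity posteriors.

    t: (N,) labels in {-1, +1}.
    cav_mean, cav_var: mean and variance of the leave-one-out (cavity)
    Gaussian posterior over each activation f_i = w^T x_i.
    """
    p_correct = norm.cdf(t * cav_mean / np.sqrt(1.0 + cav_var))
    error_prob = np.mean(1.0 - p_correct)    # LOO error probability
    error_count = np.sum(p_correct < 0.5)    # LOO error count
    return error_prob, error_count

# Dummy usage with made-up cavity moments.
t = np.array([1, -1, 1])
print(loo_error_estimates(t, np.array([0.8, -0.2, 1.5]),
                          np.array([0.5, 0.4, 0.3])))
```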

Expectation Propagation in a Nutshell
Approximate a probability distribution by a product of simpler parametric terms:
$p(\mathbf{w} \mid \mathbf{t}) \propto p(\mathbf{w}) \prod_i g_i(\mathbf{w}) \;\approx\; q(\mathbf{w}) \propto p(\mathbf{w}) \prod_i \tilde{g}_i(\mathbf{w})$
Each approximation term $\tilde{g}_i(\mathbf{w})$ lives in an exponential family (e.g., Gaussian).

EP in a Nutshell
Three key steps:
Deletion step: approximate the "leave-one-out" (cavity) posterior for the $i$th point by dividing out its term: $q^{\setminus i}(\mathbf{w}) \propto q(\mathbf{w}) / \tilde{g}_i(\mathbf{w})$
Moment matching: minimize the following KL divergence:
$\min_{q} \mathrm{KL}\!\left( g_i(\mathbf{w})\, q^{\setminus i}(\mathbf{w}) \,\big\|\, q(\mathbf{w}) \right)$
Inclusion: set the new term to $\tilde{g}_i(\mathbf{w}) \propto q(\mathbf{w}) / q^{\setminus i}(\mathbf{w})$
The key observation: we can use the approximate leave-one-out posterior, obtained in the deletion step, for model selection. No extra computation!
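
Below is a compact, illustrative EP loop for the probit model showing all three steps. It follows the standard moment-matching formulas for probit sites (as in textbook treatments of EP for Gaussian-process classification) rather than reproducing the paper's exact implementation, and it returns the cavity means and variances that Predictive-ARD consumes:

```python
import numpy as np
from scipy.stats import norm

def ep_probit(K, t, n_sweeps=10, jitter=1e-6):
    """EP for the probit likelihood prod_i Phi(t_i f_i), prior f ~ N(0, K).

    For the linear classifier with w ~ N(0, I), take K = X @ X.T (f = X w).
    Site i is a Gaussian with natural parameters (nu[i], tau[i]).
    """
    N = len(t)
    nu, tau = np.zeros(N), np.zeros(N)
    Kj = K + jitter * np.eye(N)
    Kinv = np.linalg.inv(Kj)
    Sigma, mu = Kj.copy(), np.zeros(N)
    for _ in range(n_sweeps):
        for i in range(N):
            # Deletion: divide site i out of q to get the cavity q^{\i}.
            tau_c = 1.0 / Sigma[i, i] - tau[i]
            nu_c = mu[i] / Sigma[i, i] - nu[i]
            m_c, v_c = nu_c / tau_c, 1.0 / tau_c
            # Moment matching: match mean/variance of Phi(t_i f) * cavity.
            z = t[i] * m_c / np.sqrt(1.0 + v_c)
            r = norm.pdf(z) / max(norm.cdf(z), 1e-12)
            m_hat = m_c + t[i] * v_c * r / np.sqrt(1.0 + v_c)
            v_hat = v_c - v_c ** 2 * r * (r + z) / (1.0 + v_c)
            # Inclusion: new site = matched posterior / cavity.
            tau[i] = 1.0 / v_hat - tau_c
            nu[i] = m_hat / v_hat - nu_c
            # Refresh the global moments (rank-one updates would be
            # cheaper; a full solve keeps this sketch short).
            Sigma = np.linalg.inv(Kinv + np.diag(tau))
            mu = Sigma @ nu
    # Cavity (leave-one-out) means and variances, for Predictive-ARD.
    cav_tau = 1.0 / np.diag(Sigma) - tau
    cav_nu = mu / np.diag(Sigma) - nu
    return mu, Sigma, cav_nu / cav_tau, 1.0 / cav_tau
```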

Outline
Background
– Bayesian classification model
– Automatic relevance determination (ARD)
Risk of overfitting by optimizing hyperparameters
Predictive ARD by expectation propagation (EP):
– Approximate prediction error
– EP approximation
Experiments
Conclusions

Comparison of different model selection criteria for ARD training
[Figure rows:]
1st row: test error
2nd row: estimated leave-one-out error probability
3rd row: estimated leave-one-out error counts
4th row: evidence (model marginal likelihood)
5th row: fraction of selected features
The estimated leave-one-out error probabilities and counts correlate better with the test error than the evidence or the sparsity level do.

Gene Expression Classification
Task: classify gene expression datasets into different categories, e.g., normal vs. cancer.
Challenge: thousands of genes are measured in the microarray data, but only a small subset of genes is likely to be correlated with the classification task.

Classifying Leukemia Data
The task: distinguish acute myeloid leukemia (AML) from acute lymphoblastic leukemia (ALL).
The dataset: 47 ALL and 25 AML samples, with 7129 features per sample.
The dataset was randomly split 100 times into 36 training and 36 testing samples.
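
A sketch of this evaluation protocol (my own illustration; `fit` and `predict` are hypothetical stand-ins for any classifier):

```python
import numpy as np

def random_split_errors(X, t, train_size, n_splits, fit, predict, seed=0):
    """Average test error over repeated random train/test splits,
    e.g., 100 splits of 36 training / 36 testing for the leukemia data.
    """
    rng = np.random.default_rng(seed)
    errs = []
    for _ in range(n_splits):
        perm = rng.permutation(len(t))
        tr, te = perm[:train_size], perm[train_size:]
        model = fit(X[tr], t[tr])
        errs.append(np.mean(predict(model, X[te]) != t[te]))
    return np.mean(errs), np.std(errs)
```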

Classifying Colon Cancer Data
The task: distinguish normal from cancer samples.
The dataset: 22 normal and 40 cancer samples, with 2000 features per sample.
The dataset was randomly split 100 times into 50 training and 12 testing samples.
SVM results are from Li et al. (2002).

Bayesian Sparse Kernel Classifiers
Use a feature/kernel expansion defined on the training data points:
$\boldsymbol{\phi}(\mathbf{x}) = [\,k(\mathbf{x}, \mathbf{x}_1), \ldots, k(\mathbf{x}, \mathbf{x}_N)\,]^\top$
Predictive-ARD-EP trains a classifier that depends on only a small subset of the training set, giving fast test performance.
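
A minimal sketch of the kernel expansion (my code; here "kernel width" is assumed to mean the Gaussian bandwidth $\sigma$ in $\exp(-\lVert x - x' \rVert^2 / 2\sigma^2)$, which may differ from the slides' convention):

```python
import numpy as np

def rbf_expansion(X, X_train, width=5.0):
    """Feature expansion on training points:
    phi(x) = [k(x, x_1), ..., k(x, x_N)], with a Gaussian kernel.

    With an ARD prior on the expansion weights, pruning a weight drops
    the corresponding training point, leaving a sparse set of
    'relevance vectors'.
    """
    sq = ((X[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2.0 * width ** 2))
```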

Test error rates and numbers of relevance or support vectors on the breast cancer dataset.
50 partitionings of the data were used. All methods use the same Gaussian kernel with kernel width = 5. The trade-off parameter C in the SVM is chosen via 10-fold cross-validation for each partition.

Test error rates on the diabetes data.
100 partitionings of the data were used. Evidence-ARD-EP and Predictive-ARD-EP use the Gaussian kernel with kernel width = 5.

Outline
Background
– Bayesian classification model
– Automatic relevance determination (ARD)
Risk of overfitting by optimizing hyperparameters
Predictive ARD by expectation propagation (EP):
– Approximate prediction error
– EP approximation
– Sequential update
Experiments
Conclusions

Conclusions
Maximizing the marginal likelihood can lead to overfitting in the model space if there are many features.
We propose Predictive-ARD based on EP for
– feature selection
– sparse kernel learning
In practice, Predictive-ARD works better than traditional ARD.

Appendix: Sequential Updates
EP approximates the true likelihood terms by Gaussian virtual observations.
With Gaussian virtual observations, the classification model becomes a regression model. We can then perform efficient sequential updates without maintaining and updating a full covariance matrix (Faul & Tipping, 2002).
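
Faul & Tipping's scheme avoids ever forming the full covariance; as a simpler illustration of the underlying idea, the sketch below (my own, not the paper's algorithm) shows the Kalman-style rank-one posterior update that absorbs one Gaussian virtual observation at a time:

```python
import numpy as np

def rank_one_update(m, V, x, y, noise_var):
    """Sequential Bayesian update for a linear-Gaussian (regression) model.

    Once EP has replaced each probit term with a Gaussian virtual
    observation y ~ N(x^T w, noise_var), the posterior N(m, V) over w can
    be refined one observation at a time, avoiding a full refit.
    """
    Vx = V @ x
    s = noise_var + x @ Vx              # predictive variance of y
    gain = Vx / s
    m_new = m + gain * (y - x @ m)      # correct the mean by the residual
    V_new = V - np.outer(gain, Vx)      # shrink uncertainty along x
    return m_new, V_new

# Example: start from the prior N(0, I) and absorb one virtual observation.
d = 3
m, V = np.zeros(d), np.eye(d)
m, V = rank_one_update(m, V, x=np.array([1.0, 0.5, -0.2]),
                       y=0.7, noise_var=0.3)
print(np.round(m, 3))
```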