Bayesian Factor Regression Models in the “Large p, Small n” Paradigm
Mike West, Duke University
Presented by: John Paisley, Duke University

Outline
- Empirical Factor Regression (SVD)
- Latent Factor Regression
- Sparse Factor Regression

Linear Regression & Empirical Factor Regression
- Linear regression: y = X′β + ν, where X is the p × n matrix whose columns are the predictor vectors.
- SVD regression: write X = A D F, so that y = F′θ + ν with θ = D A′β.
- D is a diagonal matrix of singular values.
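
A minimal NumPy sketch of this step (a toy illustration with my own variable names, not the talk's code):

    import numpy as np

    # Toy "large p, small n" data: p = 200 predictors, n = 30 samples.
    rng = np.random.default_rng(0)
    p, n = 200, 30
    X = rng.standard_normal((p, n))       # p x n; columns are samples, as in the slides
    beta_true = rng.standard_normal(p)
    y = X.T @ beta_true + 0.1 * rng.standard_normal(n)

    # Economy SVD: X = A D F, with A (p x k), D (k x k) diagonal, F (k x n), k = n here.
    A, d, F = np.linalg.svd(X, full_matrices=False)

    # The regression moves to factor space: y = F' theta + noise, with theta = D A' beta.
    design = F.T                          # n x k factor design matrix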

Empirical Factor Regression
- By definition, θ = D A′β.
- Regression is now done in factor space, using generalized shrinkage (ridge regression) priors on θ, e.g. the RVM.
- Problem of inversion: θ = D A′β is a many-to-one mapping from β to θ; β = A D⁻¹θ is the canonical “least-norm” inverse.
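
Continuing the sketch above, a plain ridge estimate of θ in factor space and its least-norm pullback to β (ridge here stands in for the generalized shrinkage priors / RVM of the talk):

    # Ridge-shrinkage estimate of theta; the k x k system is cheap even when p is huge.
    lam = 1.0                             # ridge penalty, standing in for the prior
    k = design.shape[1]
    theta_hat = np.linalg.solve(design.T @ design + lam * np.eye(k), design.T @ y)

    # Canonical least-norm inverse: beta = A D^{-1} theta.
    beta_hat = A @ (theta_hat / d)        # componentwise division by the singular values
    print(beta_hat.shape)                 # (200,) -- dense: every predictor gets weight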

Example: Biscuit Dough Data
- NIR spectroscopy reflectance values are the predictors
- Response is the fat content of the dough samples
- 39 training and 39 test samples: the data are pooled and the test responses treated as missing values to be imputed
- Top 16 factors used, based on the size of the singular values
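
A sketch of the top-k truncation and held-out prediction (plug-in ridge prediction rather than the talk's full Bayesian imputation of the missing responses; the function and its names are mine):

    import numpy as np

    def topk_factor_predict(X_train, y_train, X_test, k=16, lam=1.0):
        """Ridge regression on the top-k SVD factors of X_train (p x n_train),
        then predict held-out responses by projecting X_test onto the same factors."""
        A, d, F = np.linalg.svd(X_train, full_matrices=False)
        A_k, d_k = A[:, :k], d[:k]
        design = F[:k].T                               # n_train x k factor scores
        theta = np.linalg.solve(design.T @ design + lam * np.eye(k),
                                design.T @ y_train)
        # Factor scores for new samples: f = D^{-1} A' x.
        F_test = (A_k.T @ X_test) / d_k[:, None]       # k x n_test
        return F_test.T @ theta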

Example: Biscuit Dough Data (2)
[Figure: Left, fitted and predicted values vs. true values. Right, the least-norm inverse estimate of β across wavelengths; the ~1700 nm range is the absorbance region for fat.]
As the right panel shows, the solution is not sparse.

Latent Factor Regression
- Loosen x = A D f to the latent factor model x = Bλ + ν, with idiosyncratic noise ν ~ N(0, Φ), Φ diagonal.
- Under proper constraints on B, this finds common structure in X and isolates idiosyncrasies to the noise term.
- Now, variation in X has less effect on y.
- The implied prior is x ~ N(0, BB′ + Φ); when the variance Φ → 0, this reverts to the empirical factor regression above.
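
As a quick non-Bayesian analogue, scikit-learn's FactorAnalysis fits the same x = Bλ + ν model with diagonal Φ by maximum likelihood (illustrative only; the talk's treatment is fully Bayesian, and the toy data below are mine):

    import numpy as np
    from sklearn.decomposition import FactorAnalysis
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(1)
    n, p, k = 49, 200, 25                  # shapes loosely echo the examples
    X = rng.standard_normal((n, p))        # scikit-learn wants samples in rows
    y = rng.standard_normal(n)             # toy response

    fa = FactorAnalysis(n_components=k, random_state=1)
    scores = fa.fit_transform(X)           # n x k factor scores (the lambdas)
    B = fa.components_.T                   # p x k loadings matrix B
    Phi = fa.noise_variance_               # diagonal of the idiosyncratic variance

    # Regress y on the k factor scores instead of the p raw predictors.
    reg = Ridge(alpha=1.0).fit(scores, y)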

Sparse Latent Factor Regression
With respect to gene expression profiling, “multiple biological factors underlie patterns of gene expression variation, so latent factor approaches are natural – we imagine that latent factors reflect individual biological functions… This is a motivating context for sparse models.”
Columns of B represent the genes involved in a particular biological factor; rows of B represent a particular gene’s involvement across biological factors.
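
A loose stand-in for sparse loadings using scikit-learn's SparsePCA (West's models place sparsity priors with point masses at zero on B; an L1-penalized decomposition only mimics that effect, and the setup below is mine):

    import numpy as np
    from sklearn.decomposition import SparsePCA

    rng = np.random.default_rng(2)
    X = rng.standard_normal((49, 500))     # n samples x p genes (toy scale)

    spca = SparsePCA(n_components=25, alpha=2.0, random_state=2)
    scores = spca.fit_transform(X)         # n x k factor scores
    B = spca.components_.T                 # p x k loadings; many entries exactly zero
    print("fraction of zero loadings:", np.mean(B == 0.0))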

Example: Gene Expression Data
- p = 6128 genes measured using Affymetrix DNA microarrays
- n = 49 breast cancer tumor samples
- k = 25 factors
- Factor 3 separates the tumors by estrogen receptor (ER) status: red = ER positive, blue = ER negative
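
Continuing the sketch above, one way to ask which factor separates ER status is to rank factors by a two-sample statistic on their scores (the labels here are random placeholders, so nothing real separates in this toy run):

    from scipy import stats

    er_positive = rng.integers(0, 2, size=scores.shape[0]).astype(bool)  # placeholder labels
    t_stats = [abs(stats.ttest_ind(scores[er_positive, j],
                                   scores[~er_positive, j]).statistic)
               for j in range(scores.shape[1])]
    best = int(np.argmax(t_stats))
    print(f"factor {best + 1} separates the two groups best in this toy run")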

Example: Gene Expression Data (2)
Comparison with the results obtained using the empirical SVD factors.

Conclusion
Sparse factor regression modeling is a promising framework for dimensionality reduction of predictors. Only the factors relevant to the response (e.g. factor 3 above) are of interest; therefore, only the genes with non-zero loadings in those columns of B are meaningful.
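
In code terms, reading off the meaningful genes is a single indexing step on B (continuing the sparse sketch above; the factor index is illustrative):

    # Genes with non-zero loadings in the relevant factor's column of B.
    relevant = 2                           # e.g. "factor 3", 0-indexed
    meaningful_genes = np.flatnonzero(B[:, relevant])
    print(f"{meaningful_genes.size} genes carry non-zero weight on factor {relevant + 1}")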