
CSCI 5822 Probabilistic Models of Human and Machine Learning
Mike Mozer
Department of Computer Science and Institute of Cognitive Science
University of Colorado at Boulder

Fall Course

Gaussian Processes For Regression, Classification, and Prediction

How Do We Deal With Many Parameters, Little Data?
1. Regularization: e.g., smoothing, an L1 penalty, dropout in neural nets, a large K for K-nearest neighbor.
2. Standard Bayesian approach: specify the probability of the data given the weights, P(D|W); specify a prior over weights given hyperparameter α, P(W|α); find the posterior over weights given the data, P(W|D, α). With little data, a strong weight prior constrains inference.
3. Gaussian processes: place a prior over functions directly, rather than over model parameters W.

Problems That GPs Solve
- Overfitting: in problems with limited data, the data themselves aren't sufficient to constrain the model predictions.
- Uncertainty in prediction: we want not only to predict test values but also to estimate the uncertainty in those predictions.
- A convenient language for expressing domain knowledge: the prior is specified over function space, and intuitions are likely to be stronger in function space than in parameter space, e.g., smoothness constraints on the function, trends over time, etc.

Intuition
- Prior: all smooth functions are allowed. The grey area shows plausible values of f(x).
- Observe 2 data points -> the only allowed functions are those consistent with the data.
- We end up with a posterior over functions, roughly like throwing out all functions that don't pass through the data points.
- Posterior uncertainty shrinks near the observations. Why? Because of the smoothness of the functions: points near the observations must have values similar to those of the observations.
Figures from Rasmussen & Williams (2006), Gaussian Processes for Machine Learning.

Demo http://chifeng.scripts.mit.edu/stuff/gp-demo/

A Two-Dimensional Input Space Kriging / Shane’s work

Important Ideas
- Nonparametric model: the complexity of the posterior can grow as more data are added (compare to polynomial regression, whose complexity is fixed in advance).
- The prior is determined by one or more smoothness parameters; we will learn these parameters from the data.

Gaussian Distributions: A Reminder Figures stolen from Rasmussen NIPS 2006 tutorial

Gaussian Distributions: A Reminder Marginals of multivariate Gaussians are Gaussian Figure stolen from Rasmussen NIPS 2006 tutorial

Gaussian Distributions: A Reminder Conditionals of multivariate Gaussians are Gaussian
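The original slide conveys this with figures that are not reproduced in the transcript. For reference, a standard statement of the partitioned-Gaussian identities (notation mine, not from the slides); this is exactly the machinery GP prediction relies on:

```latex
% Partition a Gaussian vector into blocks a and b:
\begin{bmatrix} \mathbf{x}_a \\ \mathbf{x}_b \end{bmatrix}
\sim \mathcal{N}\!\left(
\begin{bmatrix} \boldsymbol{\mu}_a \\ \boldsymbol{\mu}_b \end{bmatrix},
\begin{bmatrix} \Sigma_{aa} & \Sigma_{ab} \\ \Sigma_{ba} & \Sigma_{bb} \end{bmatrix}
\right)

% Marginal (previous slide):
\mathbf{x}_a \sim \mathcal{N}(\boldsymbol{\mu}_a, \Sigma_{aa})

% Conditional (this slide):
\mathbf{x}_a \mid \mathbf{x}_b \sim \mathcal{N}\!\left(
\boldsymbol{\mu}_a + \Sigma_{ab}\Sigma_{bb}^{-1}(\mathbf{x}_b - \boldsymbol{\mu}_b),\;
\Sigma_{aa} - \Sigma_{ab}\Sigma_{bb}^{-1}\Sigma_{ba}
\right)
```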

Stochastic Process
- Generalization of a multivariate distribution to an infinite (possibly uncountable) number of dimensions.
- A collection of random variables G(x) indexed by x.
- E.g., the Dirichlet process, where x might be the parameters of a mixture component, or a possible word or topic in a topic model.
- The joint distribution over any finite subset of values of x is Dirichlet: (G(x1), G(x2), …, G(xk)) ~ Dirichlet(·).

Gaussian Process
- An infinite-dimensional Gaussian distribution over functions, F(x).
- Draws from a Gaussian process are functions: f ~ GP(·).
- Notation switch relative to the Dirichlet process: G0 -> F, G -> f, G(x) -> f(x); i.e., F instead of G, f(x) instead of G(x).
[Figure: a sample function f plotted against x]

Gaussian Process
- The joint distribution over any finite subset of x values is Gaussian: (f(x1), f(x2), …, f(xk)) ~ Gaussian(·).
- Consider the relation between two points, x1 and x2: observing f(x1) constrains the value of f(x2).
[Figure: sample functions f(x) with two nearby inputs x1 and x2 marked]

Gaussian Process
- How do points covary as a function of their distance?
[Figure: f(x1) plotted against f(x2) for nearby and for distant input pairs x1, x2]

From Gaussian Distribution To Gaussian Process
- A multivariate Gaussian distribution is defined by a mean vector and a covariance matrix.
- A Gaussian process is fully defined by a mean function m(x) and a covariance function k(x, x').
- Example: the squared exponential covariance, k(x, x') = exp(-(x - x')^2 / (2 l^2)), where l is the length scale.
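A minimal sketch (my own code, not the course's; assumes 1-D inputs and NumPy) of the squared exponential covariance and of drawing sample functions from a zero-mean GP prior by evaluating the process on a finite grid:

```python
# Squared exponential covariance and samples from a zero-mean GP prior.
import numpy as np

def se_kernel(X1, X2, sigma_f=1.0, ell=1.0):
    # k(x, x') = sigma_f^2 * exp(-(x - x')^2 / (2 * ell^2))
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return sigma_f**2 * np.exp(-0.5 * d2 / ell**2)

rng = np.random.default_rng(0)
x = np.linspace(-5, 5, 200)
K = se_kernel(x, x, ell=1.0) + 1e-8 * np.eye(len(x))   # jitter for numerical stability
samples = rng.multivariate_normal(np.zeros(len(x)), K, size=3)  # 3 functions from the prior
# Shorter length scales give wigglier sample functions; longer ones give smoother functions.
```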

Inference
- The GP specifies a prior over functions, F(x).
- Suppose we have a set of observations: D = {(x1, y1), (x2, y2), (x3, y3), …, (xn, yn)}.
- Bayesian approach: p(F|D) ~ p(D|F) p(F).
- If the data are noise free: P(D|F) = 0 if f(xi) ≠ yi for any i, and P(D|F) = 1 otherwise.

Graphical Model Depiction of Gaussian Process Inference
- Square boxes are observed variables.
- Red = training data, blue = test data.
[Figure: graphical model with observed and unobserved nodes for the training and test points]

GP Posterior Predictive Distribution
- GP prior: f(x) ~ GP(m(x), k(x, x')), typically with m(x) = 0.
- X: set of training points; X*: set of test points.
- All points, training and test, need to be jointly Gaussian, so decompose the covariance matrix into blocks over X and X*:
  [f, f*] ~ Gaussian(0, [[K(X,X), K(X,X*)], [K(X*,X), K(X*,X*)]])
- Conditioning on the training values gives the posterior predictive distribution:
  f* | X, f, X* ~ Gaussian(K(X*,X) K(X,X)^-1 f, K(X*,X*) - K(X*,X) K(X,X)^-1 K(X,X*))
- The posterior makes a prediction for each test value, along with the Gaussian uncertainty.
- The form is analytical because a GP prior combined with a Gaussian likelihood yields a Gaussian process posterior; marginalize to get a Gaussian posterior over any finite set of points.
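A minimal sketch (mine, assuming the same 1-D squared exponential kernel and noise-free observations) of computing this posterior predictive mean and covariance directly from the block decomposition:

```python
# Noise-free GP posterior predictive: condition the joint Gaussian over training
# values f and test values f* on the observed training values.
import numpy as np

def se_kernel(X1, X2, sigma_f=1.0, ell=1.0):
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return sigma_f**2 * np.exp(-0.5 * d2 / ell**2)

def gp_posterior_noise_free(X, f, X_star, jitter=1e-10):
    K = se_kernel(X, X) + jitter * np.eye(len(X))        # K(X, X)
    K_s = se_kernel(X, X_star)                           # K(X, X*)
    K_ss = se_kernel(X_star, X_star)                     # K(X*, X*)
    mean = K_s.T @ np.linalg.solve(K, f)                 # K(X*,X) K(X,X)^-1 f
    cov = K_ss - K_s.T @ np.linalg.solve(K, K_s)         # K(X*,X*) - K(X*,X) K(X,X)^-1 K(X,X*)
    return mean, cov

X = np.array([-4.0, -1.0, 0.5, 2.0])                     # hypothetical training inputs
f = np.sin(X)                                            # noise-free observations
X_star = np.linspace(-5, 5, 100)
mean, cov = gp_posterior_noise_free(X, f, X_star)
std = np.sqrt(np.clip(np.diag(cov), 0.0, None))          # per-point predictive uncertainty
```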

Observation Noise
- What if the observations are noisy: y = f(x) + ε, with ε ~ Gaussian(0, sigma_n^2)?
- The covariance function becomes cov(y_p, y_q) = k(x_p, x_q) + sigma_n^2 δ_pq (Kronecker delta), i.e., K(X,X) + sigma_n^2 I on the training points.
- The posterior predictive distribution becomes:
  f* | X, y, X* ~ Gaussian(K(X*,X) [K(X,X) + sigma_n^2 I]^-1 y, K(X*,X*) - K(X*,X) [K(X,X) + sigma_n^2 I]^-1 K(X,X*))
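Under those assumptions, the only change to the earlier sketch is adding sigma_n^2 I to the training-block covariance and conditioning on the noisy targets y; a hedged version:

```python
# Same sketch as above, now with observation noise.
import numpy as np

def se_kernel(X1, X2, sigma_f=1.0, ell=1.0):
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return sigma_f**2 * np.exp(-0.5 * d2 / ell**2)

def gp_posterior_noisy(X, y, X_star, sigma_n=0.1):
    Ky = se_kernel(X, X) + sigma_n**2 * np.eye(len(X))   # K(X, X) + sigma_n^2 I
    K_s = se_kernel(X, X_star)
    K_ss = se_kernel(X_star, X_star)
    mean = K_s.T @ np.linalg.solve(Ky, y)
    cov = K_ss - K_s.T @ np.linalg.solve(Ky, K_s)
    return mean, cov
```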

What About The Length Scale?
Note: with a short length scale, uncertainty grows rapidly with distance from the data points.

Full-Blown Squared-Exponential Covariance Function Has Three Parameters
k(x_p, x_q) = sigma_f^2 exp(-(x_p - x_q)^2 / (2 l^2)) + sigma_n^2 δ_pq, with signal variance sigma_f^2, length scale l, and noise variance sigma_n^2.
- How do we determine the parameters? Prior knowledge, maximum likelihood, or Bayesian inference.
- There are potentially many other free parameters, e.g., in the mean function.
- QUIZ: what will sigma_f do? sigma_f determines the range of y values, e.g., all values in [-1, +1] vs. all values in [-100, +100].
- Bayesian inference is nice because you can impose a prior on the parameters, but it is typically very expensive.
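A minimal sketch (my own, not the course's code; toy data and parameter names are hypothetical) of the maximum likelihood route: fit sigma_f, l, and sigma_n by maximizing the log marginal likelihood, log p(y|X) = -1/2 y^T Ky^-1 y - 1/2 log|Ky| - n/2 log 2π, optimizing over log-parameters to keep them positive:

```python
import numpy as np
from scipy.optimize import minimize

def se_kernel(X1, X2, sigma_f, ell):
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return sigma_f**2 * np.exp(-0.5 * d2 / ell**2)

def neg_log_marginal_likelihood(log_params, X, y):
    sigma_f, ell, sigma_n = np.exp(log_params)
    n = len(X)
    Ky = se_kernel(X, X, sigma_f, ell) + sigma_n**2 * np.eye(n)
    L = np.linalg.cholesky(Ky)                          # Ky = L L^T
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y)) # Ky^-1 y
    return (0.5 * y @ alpha
            + np.sum(np.log(np.diag(L)))                # 0.5 * log|Ky|
            + 0.5 * n * np.log(2 * np.pi))

# Toy data (hypothetical): noisy observations of a smooth function
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=20)
y = np.sin(X) + 0.1 * rng.standard_normal(20)

result = minimize(neg_log_marginal_likelihood, x0=np.log([1.0, 1.0, 0.1]),
                  args=(X, y), method="L-BFGS-B")
sigma_f, ell, sigma_n = np.exp(result.x)
print(f"sigma_f={sigma_f:.3f}, ell={ell:.3f}, sigma_n={sigma_n:.3f}")
```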

Many Forms For Covariance Function From GPML toolbox
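The covariance-function table from the GPML toolbox is not reproduced in this transcript. As a sketch of the variety, here are two other standard forms (textbook definitions, not copied from GPML), written in the same style as the earlier squared exponential:

```python
import numpy as np

def matern32(X1, X2, sigma_f=1.0, ell=1.0):
    # Matern covariance with nu = 3/2: rougher sample functions than the squared exponential
    r = np.abs(X1[:, None] - X2[None, :])
    a = np.sqrt(3.0) * r / ell
    return sigma_f**2 * (1.0 + a) * np.exp(-a)

def periodic(X1, X2, sigma_f=1.0, ell=1.0, period=1.0):
    # Periodic covariance: encodes the prior belief that f repeats with the given period
    r = np.abs(X1[:, None] - X2[None, :])
    return sigma_f**2 * np.exp(-2.0 * np.sin(np.pi * r / period)**2 / ell**2)

x = np.linspace(0, 3, 5)
K_per = periodic(x, x, period=1.0)   # e.g., yearly seasonality if x is measured in years
```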

Many Forms For Likelihood Function
- What is the likelihood function? The noise-free case we discussed: p(y | f(x)) puts all its mass on y = f(x). The noisy-observation case we discussed: p(y | f(x)) = Gaussian(y; f(x), sigma_n^2).
- With these likelihoods, inference is exact: the Gaussian process is the conjugate prior of the Gaussian likelihood. (This works for noisy observations because the noise is incorporated into the covariance function.)

Customizing Likelihood Function To Problem
Suppose that instead of real-valued observations y, the observations are binary {0, 1}, e.g., class labels.
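A minimal sketch (my illustration, not from the slides) of the standard way this is handled: keep the GP prior on a latent function f, and replace the Gaussian likelihood with a Bernoulli likelihood whose success probability is f squashed through a logistic sigmoid, p(y = 1 | f(x)) = 1 / (1 + exp(-f(x))). Exact inference is then no longer possible; the snippet just visualizes the generative model by sampling a latent function from the prior and squashing it:

```python
import numpy as np

def se_kernel(X1, X2, sigma_f=1.0, ell=1.0):
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return sigma_f**2 * np.exp(-0.5 * d2 / ell**2)

rng = np.random.default_rng(1)
x = np.linspace(-5, 5, 200)
K = se_kernel(x, x) + 1e-8 * np.eye(len(x))       # jitter for numerical stability
f = rng.multivariate_normal(np.zeros(len(x)), K)  # latent function drawn from the GP prior
p_class1 = 1.0 / (1.0 + np.exp(-f))               # Bernoulli class probability at each x
y = rng.binomial(1, p_class1)                     # sampled binary class labels
```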

Many Approximate Inference Techniques For GPs
The choice depends on the likelihood function. [Table from the GPML toolbox listing inference methods]

What Do GPs Buy Us?
- The choice of covariance and mean functions seems a good way to incorporate prior knowledge, e.g., a periodic covariance function.
- As with all Bayesian methods, the greatest benefit comes when the problem is data limited.
- Predictions with error bars.
- An elegant way of doing global optimization…
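A minimal sketch (my illustration, not from the course) of how a GP supports global optimization of an expensive black-box function: fit a GP to the points evaluated so far, then choose the next query point where an acquisition function such as the upper confidence bound, mu(x) + kappa * sigma(x), is largest. The objective and kernel settings below are hypothetical.

```python
import numpy as np

def se_kernel(X1, X2, sigma_f=1.0, ell=0.7):
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return sigma_f**2 * np.exp(-0.5 * d2 / ell**2)

def gp_predict(X_train, y_train, X_test, sigma_n=0.05):
    K = se_kernel(X_train, X_train) + sigma_n**2 * np.eye(len(X_train))
    K_s = se_kernel(X_train, X_test)
    K_ss = se_kernel(X_test, X_test)
    mu = K_s.T @ np.linalg.solve(K, y_train)
    cov = K_ss - K_s.T @ np.linalg.solve(K, K_s)
    return mu, np.sqrt(np.clip(np.diag(cov), 0.0, None))

def objective(x):                     # hypothetical expensive black-box function
    return -np.sin(3 * x) - x**2 + 0.7 * x

X_obs = np.array([-0.9, 1.1])         # points evaluated so far
y_obs = objective(X_obs)
X_grid = np.linspace(-2, 2, 400)

mu, sd = gp_predict(X_obs, y_obs, X_grid)
ucb = mu + 2.0 * sd                   # explore/exploit trade-off
x_next = X_grid[np.argmax(ucb)]       # next point to evaluate
print("next query:", x_next)
```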