Hidden Variables, the EM Algorithm, and Mixtures of Gaussians
Computer Vision CS 543 / ECE 549, University of Illinois
Derek Hoiem, 02/22/11


HW 1 is graded: mean = 93, median = 98. A few comments:
– Diffuse component for estimating light color
– Make sure to choose an appropriately sized filter
– Comparing frequencies for images at multiple scales
– Wide variety of interesting apps; maybe some would make good final projects?

HW 1: Estimating Camera/Building Height

Today’s Class
Examples of missing data problems
– Detecting outliers
– Latent topic models (HW 2, problem 3)
– Segmentation (HW 2, problem 4)
Background
– Maximum likelihood estimation
– Probabilistic inference
Dealing with “hidden” variables
– EM algorithm, mixture of Gaussians
– Hard EM

Missing Data Problems: Outliers You want to train an algorithm to predict whether a photograph is attractive. You collect annotations from Mechanical Turk. Some annotators try to give accurate ratings, but others answer randomly. Challenge: Determine which people to trust and the average rating by accurate annotators. Photo: Jam343 (Flickr) Annotator Ratings

Missing Data Problems: Object Discovery You have a collection of images and have extracted regions from them. Each is represented by a histogram of “visual words”. Challenge: Discover frequently occurring object categories, without pre-trained appearance models.

Missing Data Problems: Segmentation
You are given an image and want to label each pixel as foreground or background. Challenge: Segment the image into figure and ground without knowing what the foreground looks like in advance.

Missing Data Problems: Segmentation
Challenge: Segment the image into figure and ground without knowing what the foreground looks like in advance. Three steps:
1. If we had labels, how could we model the appearance of foreground and background?
2. Once we have modeled the fg/bg appearance, how do we compute the likelihood that a pixel is foreground?
3. How can we get both labels and appearance models at once?

Maximum Likelihood Estimation
1. If we had labels, how could we model the appearance of foreground and background?

Maximum Likelihood Estimation (the slide annotates the likelihood function, labeling the data and the parameters)

Maximum Likelihood Estimation: Gaussian distribution (the slides' equations are sketched below)
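The equations themselves are not in the transcript; a sketch of the standard formulation they annotate, with $x_1, \dots, x_N$ the observed data (e.g., the labeled pixels) and $\theta$ the parameters:

$\hat{\theta} = \arg\max_{\theta} p(x \mid \theta) = \arg\max_{\theta} \prod_{n=1}^{N} p(x_n \mid \theta)$

For a single Gaussian, $p(x_n \mid \mu, \sigma^2) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\frac{(x_n - \mu)^2}{2\sigma^2}\right)$, and the maximum likelihood estimates are

$\hat{\mu} = \frac{1}{N} \sum_{n} x_n, \qquad \hat{\sigma}^2 = \frac{1}{N} \sum_{n} (x_n - \hat{\mu})^2$

which is what the MATLAB example below computes separately for the foreground and background pixels.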

Example: MLE
>> mu_fg = mean(im(labels))                        % ML estimate of the foreground mean
>> sigma_fg = sqrt(mean((im(labels)-mu_fg).^2))    % ML estimate of the foreground std. dev.
>> mu_bg = mean(im(~labels))                       % ML estimate of the background mean
>> sigma_bg = sqrt(mean((im(~labels)-mu_bg).^2))   % ML estimate of the background std. dev.
>> pfg = mean(labels(:));                          % fraction of pixels labeled foreground
(The slide shows the toy image im, its label mask labels, and the numeric outputs; the parameters used to generate the data were fg: mu = 0.6, sigma = 0.1 and bg: mu = 0.4, sigma = 0.1.)

Probabilistic Inference
2. Once we have modeled the fg/bg appearance, how do we compute the likelihood that a pixel is foreground?

Probabilistic Inference
Compute the likelihood that a particular model (mixture component or label) generated a sample. (Built up over several slides; the posterior is written out below.)
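A sketch of that posterior in the foreground/background notation of the running example (the slides' own equations are not in the transcript):

$p(\mathrm{fg} \mid x) = \dfrac{p(x \mid \mathrm{fg})\, p(\mathrm{fg})}{p(x \mid \mathrm{fg})\, p(\mathrm{fg}) + p(x \mid \mathrm{bg})\, p(\mathrm{bg})}$

This is exactly the per-pixel quantity computed in the inference example below.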

Example: Inference >> pfg = 0.5; >> px_fg = normpdf(im, mu_fg, sigma_fg); >> px_bg = normpdf(im, mu_bg, sigma_bg); >> pfg_x = px_fg*pfg./ (px_fg*pfg + px_bg*(1-pfg)); im fg: mu=0.6, sigma=0.1 bg: mu=0.4, sigma=0.1 Learned Parameters p(fg | im)

Dealing with Hidden Variables
3. How can we get both labels and appearance models at once?

Mixture of Gaussians (the slide annotates the mixture density, labeling the mixture component, the component prior, and the component model parameters; the standard form is given below)
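A sketch of that density in standard notation:

$p(x \mid \theta) = \sum_{k=1}^{K} \pi_k \, \mathcal{N}(x \mid \mu_k, \Sigma_k)$

where $z = k$ indexes the mixture component, $\pi_k = p(z = k)$ is the component prior, $(\mu_k, \Sigma_k)$ are the component model parameters, and $\theta = \{\pi_k, \mu_k, \Sigma_k\}_{k=1}^{K}$.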

Mixture of Gaussians
With enough components, a mixture of Gaussians can approximate essentially any probability density function
– Widely used as a general-purpose pdf estimator

Segmentation with Mixture of Gaussians Pixels come from one of several Gaussian components – We don’t know which pixels come from which components – We don’t know the parameters for the components

Simple solution
1. Initialize parameters
2. Compute the probability of each hidden variable given the current parameters
3. Compute new parameters for each model, weighted by likelihood of hidden variables
4. Repeat 2-3 until convergence

Mixture of Gaussians: Simple Solution
1. Initialize parameters
2. Compute likelihood of hidden variables for current parameters
3. Estimate new parameters for each model, weighted by likelihood
(Steps 2-3 are written out below for a 1-D mixture.)
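One standard way to write steps 2-3 (a sketch; the responsibility notation $\gamma_{nk}$ is mine, not from the slides):

$\gamma_{nk} = p(z_n = k \mid x_n) = \dfrac{\pi_k\, \mathcal{N}(x_n \mid \mu_k, \sigma_k^2)}{\sum_j \pi_j\, \mathcal{N}(x_n \mid \mu_j, \sigma_j^2)}$

$\mu_k = \dfrac{\sum_n \gamma_{nk}\, x_n}{\sum_n \gamma_{nk}}, \qquad \sigma_k^2 = \dfrac{\sum_n \gamma_{nk}\, (x_n - \mu_k)^2}{\sum_n \gamma_{nk}}, \qquad \pi_k = \dfrac{1}{N} \sum_n \gamma_{nk}$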

Expectation Maximization (EM) Algorithm
Goal: maximize the data likelihood with respect to the parameters; the log of sums (marginalizing over the hidden variables) is intractable to optimize directly.
Jensen's inequality, which holds for concave functions such as f(x) = log(x), gives a tractable lower bound (see the slide's linked proof).
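A sketch of how the bound is obtained: for concave $f$, Jensen's inequality gives $f(\mathrm{E}[y]) \ge \mathrm{E}[f(y)]$. Applying it with $f = \log$ and any distribution $q(z)$ over the hidden variables,

$\log \sum_z p(x, z \mid \theta) = \log \sum_z q(z)\, \frac{p(x, z \mid \theta)}{q(z)} \;\ge\; \sum_z q(z) \log \frac{p(x, z \mid \theta)}{q(z)}$

so the intractable log of a sum is replaced by a sum of logs, a lower bound on the data log-likelihood.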

Expectation Maximization (EM) Algorithm
Goal: maximize the data likelihood over the parameters.
1. E-step: compute the posterior over the hidden variables given the current parameters.
2. M-step: solve for the parameters that maximize the resulting expected complete-data log-likelihood.
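In standard notation (the slides' equations are not in the transcript), with $\theta^{(t)}$ the current parameter estimate:

E-step: $\quad q(z) \leftarrow p(z \mid x, \theta^{(t)})$

M-step: $\quad \theta^{(t+1)} \leftarrow \arg\max_{\theta} \sum_z q(z) \log p(x, z \mid \theta)$

The M-step maximizes the lower bound from the previous slide; the E-step makes the bound tight at the current parameters.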

EM for Mixture of Gaussians (derived on the board): E-step and M-step updates. (A MATLAB sketch of the resulting updates follows.)
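A minimal sketch of these updates in MATLAB for a 1-D, two-component mixture, in the spirit of the earlier examples. The image im is reused from those examples; the initial values, iteration count, and variable names (ppi, gamma, Nk) are assumptions for illustration, not taken from the slides.

x = im(:);                              % pixel intensities as a column vector
mu = [0.3 0.7];                         % initial component means (assumed)
sigma = [0.2 0.2];                      % initial standard deviations (assumed)
ppi = [0.5 0.5];                        % initial component priors
for iter = 1:50
    % E-step: responsibility of each component for each pixel
    px = [ppi(1)*normpdf(x, mu(1), sigma(1)), ppi(2)*normpdf(x, mu(2), sigma(2))];
    gamma = px ./ repmat(sum(px, 2), 1, 2);
    % M-step: responsibility-weighted parameter updates
    Nk = sum(gamma, 1);
    mu = sum(repmat(x, 1, 2) .* gamma, 1) ./ Nk;
    sigma = sqrt(sum((repmat(x, 1, 2) - repmat(mu, numel(x), 1)).^2 .* gamma, 1) ./ Nk);
    ppi = Nk / numel(x);
end
pfg_x = reshape(gamma(:, 2), size(im)); % soft assignment to component 2 (e.g., foreground)

In practice one would also monitor the log-likelihood for convergence and guard against a component's variance collapsing toward zero.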

EM Algorithm
Maximizes a lower bound on the data likelihood at each iteration
– Each step increases the data likelihood
– Converges to a local maximum
Common tricks in the derivation
– Find terms that sum or integrate to 1
– Use a Lagrange multiplier to deal with constraints
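For example (a sketch, not from the slides), the constraint $\sum_k \pi_k = 1$ in the M-step is handled by maximizing

$\sum_n \sum_k \gamma_{nk} \log \pi_k + \lambda \Big(1 - \sum_k \pi_k\Big)$

Setting the derivative with respect to $\pi_k$ to zero gives $\pi_k = \frac{1}{\lambda} \sum_n \gamma_{nk}$, and the constraint fixes $\lambda = N$, recovering the prior update $\pi_k = \frac{1}{N} \sum_n \gamma_{nk}$.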

EM Demos
– Mixture of Gaussians demo
– Simple segmentation demo

“Hard EM”
Same as EM, except compute z* as the most likely values of the hidden variables; k-means is an example.
Advantages
– Simpler: can be applied when the EM updates cannot be derived
– Sometimes works better if you want to make hard predictions at the end
But
– Generally, the pdf parameters are not as accurate as with EM
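Written out (standard notation, not from the slides), the hard E-step is

$z_n^* = \arg\max_{k}\; p(z_n = k \mid x_n, \theta)$

and the M-step then re-estimates each component's parameters from only the points assigned to it; with equal, fixed covariances and equal priors this recovers the k-means update.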

Missing Data Problems: Outliers You want to train an algorithm to predict whether a photograph is attractive. You collect annotations from Mechanical Turk. Some annotators try to give accurate ratings, but others answer randomly. Challenge: Determine which people to trust and the average rating by accurate annotators. Photo: Jam343 (Flickr) Annotator Ratings

Missing Data Problems: Object Discovery You have a collection of images and have extracted regions from them. Each is represented by a histogram of “visual words”. Challenge: Discover frequently occurring object categories, without pre-trained appearance models.

(Equation slide for the object-discovery model; n is the number of elements in the histogram.)

Next class: MRFs and Graph-cut Segmentation

Missing data problems
Outlier detection
– You expect that some of the data are valid points and some are outliers, but…
– You don’t know which are inliers
– You don’t know the parameters of the distribution of inliers
Modeling probability densities
– You expect that most of the colors within a region come from one of a few normally distributed sources, but…
– You don’t know which source each pixel comes from
– You don’t know the Gaussian means or covariances
Discovering patterns or “topics”
– You expect that there are some recurring “topics” of codewords within an image collection. You want to…
– Figure out the frequency of each codeword within each topic
– Figure out which topic each image belongs to

Running Example: Segmentation
1. Given examples of foreground and background
– Estimate distribution parameters
– Perform inference
2. Knowing that foreground and background are “normally distributed”
3. Some initial estimate, but no good knowledge of foreground and background