Mixtures of Gaussians and Advanced Feature Encoding Computer Vision CS 143, Brown James Hays Many slides from Derek Hoiem, Florent Perronnin, and Hervé Jégou

Why do good recognition systems go bad? E.g. why isn’t our Bag of Words classifier at 90% instead of 70%?
– Training Data: a huge issue, but not necessarily a variable you can manipulate.
– Learning method: probably not such a big issue, unless you’re learning the representation (e.g. deep learning).
– Representation: Are the local features themselves lossy? The guest lecture Nov 8th will address this. What about feature quantization? That’s VERY lossy.

Standard k-means Bag of Words

Today’s Class
More advanced quantization / encoding methods that represent the state of the art in image classification and image retrieval:
– Soft assignment (a.k.a. Kernel Codebook)
– VLAD
– Fisher Vector
Mixtures of Gaussians

Motivation
Bag of Visual Words is only about counting the number of local descriptors assigned to each Voronoi region. Why not include other statistics?

We already looked at the Spatial Pyramid (levels 0, 1, and 2), but today we’re not talking about ways to preserve spatial information.

Motivation
Bag of Visual Words is only about counting the number of local descriptors assigned to each Voronoi region. Why not include other statistics? For instance: the mean of the local descriptors.

Motivation
Bag of Visual Words is only about counting the number of local descriptors assigned to each Voronoi region. Why not include other statistics? For instance: the mean of the local descriptors, the (co)variance of the local descriptors.

Simple case: Soft Assignment
Called “kernel codebook encoding” by Chatfield et al. 2011. Cast a weighted vote into the most similar clusters.

Simple case: Soft Assignment
Called “kernel codebook encoding” by Chatfield et al. 2011. Cast a weighted vote into the most similar clusters. This is fast and easy to implement (try it for Project 3!) but it does have some downsides for image retrieval: the inverted file index becomes less sparse.
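
As an illustration (my own sketch, not from the slides), a kernel-codebook encoder fits in a few lines of MATLAB. X (D x T local descriptors), C (D x K k-means centroids), and sigma (the kernel bandwidth) are assumed inputs with illustrative names:

function h = soft_assign_encode(X, C, sigma)
    % Soft assignment / kernel codebook: each descriptor casts a
    % Gaussian-weighted vote into nearby clusters instead of one hard vote.
    d2 = bsxfun(@plus, sum(X.^2, 1)', sum(C.^2, 1)) - 2 * (X' * C);  % T x K squared distances
    W  = exp(-d2 / (2 * sigma^2));         % kernel weights
    W  = bsxfun(@rdivide, W, sum(W, 2));   % each descriptor's votes sum to 1
    h  = sum(W, 1)';                       % accumulate votes per cluster (K x 1)
    h  = h / max(sum(h), eps);             % L1-normalize the histogram
end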

A first example: the VLAD
Given a codebook {μ1, …, μK}, e.g. learned with k-means, and a set of local descriptors x:
① assign each descriptor to its nearest centroid, NN(x) = argmin_i ||x − μi||
② compute the residual x − μi
③ accumulate per cell: vi = Σ_{x : NN(x) = i} (x − μi)
Concatenate the vi’s and normalize.
Jégou, Douze, Schmid and Pérez, “Aggregating local descriptors into a compact image representation”, CVPR’10.

A first example: the VLAD
(A graphical representation of the VLAD construction; see Jégou, Douze, Schmid and Pérez, “Aggregating local descriptors into a compact image representation”, CVPR’10.)
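
A minimal MATLAB sketch of the three steps above, with illustrative variable names (X holds the D x T local descriptors as columns, C the D x K k-means centroids); the power normalization at the end is a common optional addition rather than part of the original paper:

function v = vlad_encode(X, C)
    [D, K] = size(C);
    % (1) assign: nearest centroid for every descriptor
    d2 = bsxfun(@plus, sum(X.^2, 1)', sum(C.^2, 1)) - 2 * (X' * C);  % T x K squared distances
    [~, nn] = min(d2, [], 2);
    % (2)-(3) compute residuals and accumulate them per Voronoi cell
    V = zeros(D, K);
    for i = 1:K
        Xi = X(:, nn == i);
        if ~isempty(Xi)
            V(:, i) = sum(bsxfun(@minus, Xi, C(:, i)), 2);
        end
    end
    v = V(:);                        % concatenate the v_i
    v = sign(v) .* sqrt(abs(v));     % optional power normalization
    v = v / max(norm(v), eps);       % L2 normalization
end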

The Fisher vector: Score function
Given a likelihood function u_λ with parameters λ, the score function of a given sample X is G^X_λ = ∇_λ log u_λ(X). This is a fixed-length vector whose dimensionality depends only on the number of parameters λ. Intuition: it is the direction in which the parameters of the model should be modified to better fit the data.
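
For reference (added context, not verbatim slide text), the standard definitions behind the Fisher kernel, following Jaakkola and Haussler and Perronnin et al., are:

G^X_\lambda = \nabla_\lambda \log u_\lambda(X), \qquad
F_\lambda = \mathbb{E}_{x \sim u_\lambda}\left[ G^x_\lambda (G^x_\lambda)^\top \right], \qquad
K(X, Y) = (G^X_\lambda)^\top F_\lambda^{-1} G^Y_\lambda,

and the Fisher vector is the normalized score \mathcal{G}^X_\lambda = F_\lambda^{-1/2} G^X_\lambda, so that K(X, Y) becomes a plain dot product between Fisher vectors.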

Aside: Mixture of Gaussians (GMM)
For Fisher Vector image representations, u_λ is a GMM. A GMM can be thought of as “soft” k-means. Each component has a mean and a standard deviation along each dimension (or a full covariance matrix).
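
As a quick practical aside (my addition), a diagonal-covariance GMM can be fit in MATLAB with the Statistics and Machine Learning Toolbox; X is a hypothetical matrix of descriptors stored as rows, and the option names are quoted from memory, so check them against the toolbox documentation:

% Fit a K-component, diagonal-covariance GMM to descriptors (rows of X).
K = 64;
gmm = fitgmdist(X, K, 'CovarianceType', 'diagonal', 'RegularizationValue', 1e-4);
% Soft assignments ("responsibilities") of each descriptor to each Gaussian.
gamma = posterior(gmm, X);   % size(X,1)-by-K matrix of posteriors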

Missing Data Problems: Outliers You want to train an algorithm to predict whether a photograph is attractive. You collect annotations from Mechanical Turk. Some annotators try to give accurate ratings, but others answer randomly. Challenge: Determine which people to trust and the average rating by accurate annotators. Photo: Jam343 (Flickr) Annotator Ratings

Missing Data Problems: Object Discovery You have a collection of images and have extracted regions from them. Each is represented by a histogram of “visual words”. Challenge: Discover frequently occurring object categories, without pre-trained appearance models.

Missing Data Problems: Segmentation You are given an image and want to assign foreground/background pixels. Challenge: Segment the image into figure and ground without knowing what the foreground looks like in advance. Foreground Background Slide: Derek Hoiem

Missing Data Problems: Segmentation
Challenge: Segment the image into figure and ground without knowing what the foreground looks like in advance.
Three steps:
1. If we had labels, how could we model the appearance of foreground and background?
2. Once we have modeled the fg/bg appearance, how do we compute the likelihood that a pixel is foreground?
3. How can we get both labels and appearance models at once?
Slide: Derek Hoiem

Maximum Likelihood Estimation
1. If we had labels, how could we model the appearance of foreground and background?
Slide: Derek Hoiem

Maximum Likelihood Estimation
Given data x = {x_1, …, x_N} and a parametric model p(x | θ), choose the parameters that make the data most likely: θ̂ = argmax_θ Π_n p(x_n | θ).
Slide: Derek Hoiem

Maximum Likelihood Estimation: Gaussian Distribution
p(x | μ, σ²) = 1/(σ√(2π)) exp(−(x − μ)² / (2σ²))
The maximum likelihood estimates are the sample mean and variance: μ̂ = (1/N) Σ_n x_n and σ̂² = (1/N) Σ_n (x_n − μ̂)².
Slide: Derek Hoiem

Example: MLE
>> mu_fg = mean(im(labels))
>> sigma_fg = sqrt(mean((im(labels)-mu_fg).^2))
>> mu_bg = mean(im(~labels))
>> sigma_bg = sqrt(mean((im(~labels)-mu_bg).^2))
>> pfg = mean(labels(:));
Parameters used to generate the synthetic image: fg: mu=0.6, sigma=0.1; bg: mu=0.4, sigma=0.1. Slide: Derek Hoiem

Probabilistic Inference
2. Once we have modeled the fg/bg appearance, how do we compute the likelihood that a pixel is foreground?
Slide: Derek Hoiem

Probabilistic Inference
Compute the likelihood that a particular model (component or label) generated a sample:
p(z_n = m | x_n, θ) = p(x_n, z_n = m | θ) / Σ_k p(x_n, z_n = k | θ) = p(x_n | z_n = m, θ_m) p(z_n = m) / Σ_k p(x_n | z_n = k, θ_k) p(z_n = k)
Slide: Derek Hoiem

Example: Inference
>> pfg = 0.5;
>> px_fg = normpdf(im, mu_fg, sigma_fg);
>> px_bg = normpdf(im, mu_bg, sigma_bg);
>> pfg_x = px_fg*pfg ./ (px_fg*pfg + px_bg*(1-pfg));
Learned parameters: fg: mu=0.6, sigma=0.1; bg: mu=0.4, sigma=0.1. The result pfg_x is p(fg | im). Slide: Derek Hoiem

Mixture of Gaussians Example: Matting
Figure from “Bayesian Matting”, Chuang et al. 2001.

Result from “Bayesian Matting”, Chuang et al. 2001

Dealing with Hidden Variables
3. How can we get both labels and appearance models at once?
Slide: Derek Hoiem

Mixture of Gaussians
p(x | θ) = Σ_m π_m N(x; μ_m, σ_m²), where z = m indexes the mixture component, π_m = p(z = m) is the component prior, and (μ_m, σ_m²) are the component model parameters.
Slide: Derek Hoiem

Mixture of Gaussians
With enough components, it can approximate any probability density function.
– Widely used as a general purpose pdf estimator
Slide: Derek Hoiem

Segmentation with Mixture of Gaussians Pixels come from one of several Gaussian components – We don’t know which pixels come from which components – We don’t know the parameters for the components Slide: Derek Hoiem

Simple solution
1. Initialize parameters
2. Compute the probability of each hidden variable given the current parameters
3. Compute new parameters for each model, weighted by likelihood of hidden variables
4. Repeat 2-3 until convergence
Slide: Derek Hoiem

Mixture of Gaussians: Simple Solution
1. Initialize parameters
2. Compute likelihood of hidden variables for current parameters
3. Estimate new parameters for each model, weighted by likelihood
Slide: Derek Hoiem

Expectation Maximization (EM) Algorithm
Goal: maximize the data likelihood, argmax_θ Σ_n log Σ_z p(x_n, z | θ). The log of sums is intractable to optimize directly.
Jensen’s inequality: f(E[x]) ≥ E[f(x)] for concave functions, such as f(x) = log(x). (See here for proof: )
Slide: Derek Hoiem
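
To spell out the step the slide alludes to, here is the usual lower bound obtained with Jensen's inequality (a reconstruction of the standard derivation, not verbatim slide content):

\log p(x \mid \theta) = \log \sum_z q(z) \frac{p(x, z \mid \theta)}{q(z)} \ge \sum_z q(z) \log \frac{p(x, z \mid \theta)}{q(z)},

with equality when q(z) = p(z | x, θ); the E-step sets q to this posterior, and the M-step maximizes the resulting bound over θ.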

Expectation Maximization (EM) Algorithm
Goal: argmax_θ Σ_n log Σ_z p(x_n, z | θ)
1. E-step: compute the posterior over the hidden variables under the current parameters, q(z_n) = p(z_n | x_n, θ^(t))
2. M-step: solve θ^(t+1) = argmax_θ Σ_n Σ_z q(z_n = z) log p(x_n, z | θ)
Slide: Derek Hoiem

This looks like a soft version of k-means!

EM for Mixture of Gaussians (on board)
1. E-step: compute the responsibilities γ_nm = p(z_n = m | x_n, θ) = π_m N(x_n; μ_m, σ_m²) / Σ_k π_k N(x_n; μ_k, σ_k²)
2. M-step: re-estimate the parameters, π_m = (1/N) Σ_n γ_nm,  μ_m = Σ_n γ_nm x_n / Σ_n γ_nm,  σ_m² = Σ_n γ_nm (x_n − μ_m)² / Σ_n γ_nm
Slide: Derek Hoiem
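
A minimal MATLAB sketch of these updates for a 1-D mixture (my own illustration; it assumes the Statistics Toolbox for normpdf, uses illustrative names, and omits safeguards such as variance floors or restarts):

function [pi_m, mu, sigma] = em_gmm_1d(x, M, n_iters)
    % x: N x 1 column of samples, M: number of components.
    N = numel(x);
    pi_m  = ones(1, M) / M;            % component priors
    mu    = x(randperm(N, M))';        % random initial means
    sigma = std(x) * ones(1, M);       % shared initial std dev
    for it = 1:n_iters
        % E-step: responsibilities gamma(n, m) = p(z_n = m | x_n, theta)
        gamma = zeros(N, M);
        for m = 1:M
            gamma(:, m) = pi_m(m) * normpdf(x, mu(m), sigma(m));
        end
        gamma = bsxfun(@rdivide, gamma, sum(gamma, 2));
        % M-step: re-estimate priors, means, and std devs
        Nm    = sum(gamma, 1);         % effective counts per component
        pi_m  = Nm / N;
        mu    = (gamma' * x)' ./ Nm;
        for m = 1:M
            sigma(m) = sqrt(gamma(:, m)' * (x - mu(m)).^2 / Nm(m));
        end
    end
end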

EM Algorithm
– Maximizes a lower bound on the data likelihood at each iteration
– Each step increases the data likelihood; converges to a local maximum
Common tricks in the derivation:
– Find terms that sum or integrate to 1
– Use a Lagrange multiplier to deal with constraints
Slide: Derek Hoiem

Mixture of Gaussian demos Slide: Derek Hoiem

“Hard EM”
– Same as EM, except compute z* as the most likely values of the hidden variables
– K-means is an example
Advantages:
– Simpler: can be applied when the EM updates cannot be derived
– Sometimes works better if you want to make hard predictions at the end
But:
– Generally, the pdf parameters are not as accurate as with EM
Slide: Derek Hoiem

What’s wrong with this prediction? P(foreground | image) Slide: Derek Hoiem

Next class MRFs and Graph-cut Segmentation Slide: Derek Hoiem

Missing data problems
Outlier detection
– You expect that some of the data are valid points and some are outliers, but…
– You don’t know which are inliers
– You don’t know the parameters of the distribution of inliers
Modeling probability densities
– You expect that most of the colors within a region come from one of a few normally distributed sources, but…
– You don’t know which source each pixel comes from
– You don’t know the Gaussian means or covariances
Discovering patterns or “topics”
– You expect that there are some recurring “topics” of codewords within an image collection. You want to…
– Figure out the frequency of each codeword within each topic
– Figure out which topic each image belongs to
Slide: Derek Hoiem

Running Example: Segmentation
1. Given examples of foreground and background
– Estimate distribution parameters
– Perform inference
2. Knowing that foreground and background are “normally distributed”
3. Some initial estimate, but no good knowledge of foreground and background

The Fisher vector: Relationship with the BOV
Perronnin and Dance, “Fisher kernels on visual categories for image categorization”, CVPR’07.
FV formulas:
– The gradient w.r.t. the mixture weight w_i ≈ soft BOV (the soft assignment of patch t to Gaussian i).
– The gradients w.r.t. the mean μ_i and standard deviation σ_i add higher-order statistics (up to order 2) compared to the BOV.
Let us denote D = feature dim, N = # Gaussians: the BOV is N-dim, the FV is 2DN-dim.
– The FV is much higher-dimensional than the BOV for a given visual vocabulary size.
– The FV is much faster to compute than the BOV for a given feature dim.
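
To make these formulas concrete, here is a sketch (my own, with illustrative names) of the FV gradients with respect to the means and standard deviations, following the standard expressions G_{mu,i} = 1/(T*sqrt(w_i)) * sum_t gamma_t(i) * (x_t - mu_i)/sigma_i and G_{sigma,i} = 1/(T*sqrt(2*w_i)) * sum_t gamma_t(i) * [((x_t - mu_i)/sigma_i)^2 - 1]; the GMM (w, Mu, Sigma) is assumed to have been trained beforehand and X holds the D x T descriptors of one image:

function fv = fisher_encode(X, w, Mu, Sigma)
    % w: 1 x N mixture weights, Mu: D x N means, Sigma: D x N std devs.
    [D, T] = size(X);
    N = numel(w);
    % Soft assignments gamma(i, t) = posterior of Gaussian i for patch t.
    % (The -D/2*log(2*pi) constant is omitted; it cancels in the posterior.)
    logp = zeros(N, T);
    for i = 1:N
        d = bsxfun(@rdivide, bsxfun(@minus, X, Mu(:, i)), Sigma(:, i));
        logp(i, :) = log(w(i)) - sum(log(Sigma(:, i))) - 0.5 * sum(d.^2, 1);
    end
    logp  = bsxfun(@minus, logp, max(logp, [], 1));   % numerical stability
    gamma = exp(logp);
    gamma = bsxfun(@rdivide, gamma, sum(gamma, 1));
    % Gradients w.r.t. means (rows 1:D) and std devs (rows D+1:2D) per Gaussian
    G = zeros(2 * D, N);
    for i = 1:N
        d = bsxfun(@rdivide, bsxfun(@minus, X, Mu(:, i)), Sigma(:, i));
        G(1:D, i)     = (d * gamma(i, :)') / (T * sqrt(w(i)));
        G(D+1:2*D, i) = ((d.^2 - 1) * gamma(i, :)') / (T * sqrt(2 * w(i)));
    end
    fv = G(:);                          % 2*D*N-dimensional Fisher vector
    % Power + L2 normalization, as recommended for the improved FV
    fv = sign(fv) .* sqrt(abs(fv));
    fv = fv / max(norm(fv), eps);
end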

The Fisher vector: Dimensionality reduction on local descriptors
Perform PCA on the local descriptors:
– uncorrelated features are more consistent with the diagonal assumption on the covariance matrices in the GMM
– the FK performs whitening and enhances low-energy (possibly noisy) dimensions

The Fisher vector: Normalization (variance stabilization)
Variance-stabilizing transforms of the form f(z) = sign(z) |z|^α (with α = 0.5 by default) can be used on the FV (or the VLAD). They reduce the impact of bursty visual elements.
Jégou, Douze, Schmid, “On the burstiness of visual elements”, ICCV’09.

Datasets for image retrieval
INRIA Holidays dataset: 1491 shots of personal holiday snapshots; 500 queries, each associated with a small number of results (1-11 results); 1 million distracting images (with some “false false” positives).
Hervé Jégou, Matthijs Douze and Cordelia Schmid, “Hamming Embedding and Weak Geometric Consistency for large-scale image search”, ECCV’08.

Examples
Retrieval example on Holidays (from Jégou, Perronnin, Douze, Sánchez, Pérez and Schmid, “Aggregating local descriptors into compact codes”, TPAMI’11):
– second-order statistics are not essential for retrieval
– even for the same feature dim, the FV/VLAD can beat the BOV
– soft assignment + whitening of the FV helps when the number of Gaussians increases
– after dim-reduction, however, the FV and VLAD perform similarly

Examples
Classification example on PASCAL VOC 2007 (from Chatfield, Lempitsky, Vedaldi and Zisserman, “The devil is in the details: an evaluation of recent feature encoding methods”, BMVC’11):

Method   Feature dim   mAP
VQ       25K           55.30
KCB      25K           56.26
LLC      25K           57.27
SV       41K           58.16
FV       132K          61.69

– FV outperforms BOV-based techniques, including VQ (plain vanilla BOV), KCB (BOV with soft assignment), and LLC (BOV with sparse coding)
– including 2nd-order information is important for classification

Packages
The INRIA package:
The Oxford package:
VLFeat does it too!
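
As a pointer (my addition), VLFeat's MATLAB toolbox covers this pipeline with vl_gmm, vl_fisher, vl_vlad and vl_kmeans; the snippet below is a sketch from memory, so verify the exact signatures against the VLFeat documentation:

% Fisher-vector encoding with VLFeat (descriptors: D x T single matrix of
% training descriptors; imageDescriptors: the descriptors of one image).
numClusters = 64;
[means, covariances, priors] = vl_gmm(descriptors, numClusters);
fv = vl_fisher(imageDescriptors, means, covariances, priors, 'Improved');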

Summary
We’ve looked at methods to better characterize the distribution of visual words in an image:
– Soft assignment (a.k.a. Kernel Codebook)
– VLAD
– Fisher Vector
Mixtures of Gaussians is conceptually a soft form of k-means which can better model the data distribution.