Estimating the Likelihood of Statistical Models of Natural Image Patches Daniel Zoran ICNC – The Hebrew University of Jerusalem Advisor: Yair Weiss CIFAR.

Presentation transcript:

Estimating the Likelihood of Statistical Models of Natural Image Patches Daniel Zoran ICNC – The Hebrew University of Jerusalem Advisor: Yair Weiss CIFAR NCAP Summer School

Natural Image Statistics Natural scenes and images exhibit very distinctive statistics. A great deal of research has been done in this field since the 1950s. Important in image processing, computer vision, computational neuroscience and more…

Natural Image Statistics Properties The space of all possible images is huge – For 256 gray levels and an N×N image, there are 256^(N²) possible images – Natural images occupy a tiny fraction of this space Some statistical properties of natural images: – Translation invariance – Power-law spectrum – Scale invariance – Non-Gaussianity of marginal statistics (more on that later)
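As a quick sketch of the first bullet (the 8×8 patch size below is just an illustrative choice, not one from the slides), the size of the image space can be computed in a few lines:

```python
import math

# With 256 gray levels, an N x N image has 256**(N*N) possible settings.
# Even a small 8 x 8 patch gives a number with over 150 digits, so we
# compute its order of magnitude rather than the number itself.
N = 8
log10_count = N * N * math.log10(256)
digits = math.floor(log10_count) + 1  # decimal digits in 256**64
```

For 8×8 patches this gives roughly 10^154 possible patches, which is why natural images occupying a tiny fraction of the space is such a strong constraint.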

Estimating the Likelihood of different statistical models Over the years, many models of natural image distributions have been proposed. It is hard to test the validity of such models, especially when comparing one model to another. A step towards this: estimating the (log-)likelihood of a given model and comparing the results with other models.

Estimating the Likelihood of different models Variable-sized patches were extracted from natural images. Different models were assumed; a training set was used to estimate the parameters of each model, and the likelihood was then calculated over a test set. 5000 patches in each set. Source images are mostly JPEGs from a Panasonic digital camera, portraying outdoor scenes. Also tested on standard images (Lena, Boat and Barbara – PNG format).
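The setup above can be sketched roughly as follows. A random-noise image stands in for a real photograph, and the helper name `extract_patches` is illustrative, not code from the talk; the 5000-patch set sizes come from the slides.

```python
import numpy as np

# Assumed setup: sample flattened square patches from an image and split
# them into a training set (parameter estimation) and a test set
# (likelihood evaluation). Noise stands in for a real photograph.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(200, 200)).astype(float)

def extract_patches(img, patch_size, n_patches, rng):
    """Sample n_patches random patch_size x patch_size patches, flattened."""
    H, W = img.shape
    patches = np.empty((n_patches, patch_size * patch_size))
    for i in range(n_patches):
        r = rng.integers(0, H - patch_size + 1)
        c = rng.integers(0, W - patch_size + 1)
        patches[i] = img[r:r + patch_size, c:c + patch_size].ravel()
    return patches

train = extract_patches(image, 8, 5000, rng)  # fit model parameters on this
test = extract_patches(image, 8, 5000, rng)   # evaluate likelihood on this
```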

The models – 1D Gaussian A 1D Gaussian distribution for every pixel – Mean and variance estimated directly from the sample – The likelihood of an image x under this model is a product of independent per-pixel Gaussians: p(x) = ∏ᵢ N(xᵢ; μᵢ, σᵢ²) This model captures nothing about natural images
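A minimal sketch of this baseline, with synthetic noise standing in for natural-image patches: fit a mean and variance per pixel on the training set, then evaluate the average log-likelihood of the test set.

```python
import numpy as np

# Per-pixel 1D Gaussian baseline: each pixel gets its own mean/variance,
# pixels are treated as independent. Data here is synthetic noise.
rng = np.random.default_rng(0)
train = rng.normal(100.0, 20.0, size=(5000, 64))  # 5000 flattened 8x8 "patches"
test = rng.normal(100.0, 20.0, size=(1000, 64))

mu = train.mean(axis=0)   # per-pixel mean
var = train.var(axis=0)   # per-pixel variance

def gaussian_loglik(x, mu, var):
    """Log-likelihood of patches x under independent per-pixel Gaussians."""
    return (-0.5 * np.log(2 * np.pi * var) - (x - mu) ** 2 / (2 * var)).sum(axis=1)

avg_ll = gaussian_loglik(test, mu, var).mean()
```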

Results – 1D Gaussian [Table: likelihood by patch size (six sizes, up to 20×20) for Test Set 1 (JPG), Test Set 2 (JPG), Test Set 3 (PNG) and Test Set 4 (Noise); the numeric values were not preserved in the transcript.]

The models – Multidimensional Gaussian with PCA Using the covariance matrix, rotate the image patches in image space towards the directions of maximum variance (PCA) A multidimensional Gaussian distribution for the components: p(x) = exp(−½ (x−μ)ᵀ Σ⁻¹ (x−μ)) / √((2π)^d |Σ|) Where the covariance matrix Σ is estimated from the training set This captures the power-law spectrum property
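A rough sketch of this model, with synthetic correlated data in place of natural-image patches. Because the PCA rotation diagonalizes the estimated covariance, the multivariate Gaussian log-likelihood factorizes over the rotated components:

```python
import numpy as np

# Multidimensional Gaussian with PCA: estimate the covariance on training
# patches, rotate into its eigenbasis, and evaluate the (now factorized)
# Gaussian log-likelihood on test patches. A random mixing matrix creates
# correlated stand-in data.
rng = np.random.default_rng(0)
d = 16
A = rng.normal(size=(d, d))
train = rng.normal(size=(5000, d)) @ A.T
test = rng.normal(size=(1000, d)) @ A.T

mu = train.mean(axis=0)
cov = np.cov(train, rowvar=False)        # covariance from the training set
eigvals, eigvecs = np.linalg.eigh(cov)   # PCA directions and their variances

z = (test - mu) @ eigvecs                # rotate test patches into the PCA basis
loglik = (-0.5 * np.log(2 * np.pi * eigvals) - z**2 / (2 * eigvals)).sum(axis=1)
avg_ll = loglik.mean()
```

The factorized form agrees term by term with the full multivariate Gaussian density, since the orthogonal rotation has unit Jacobian.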

Results – Multidimensional Gaussian [Table: likelihood by patch size (six sizes, up to 20×20) for Test Set 1 (JPG), Test Set 2 (JPG), Test Set 3 (PNG) and Test Set 4 (Noise); the numeric values were not preserved in the transcript.]

The models – Gaussian Mixture Model with PCA Using the same rotation scheme (PCA), now assume a Gaussian Mixture Model for the marginal filter response distributions Under this model: p(x) = ∏ᵢ pGMM(wᵢᵀx), where the rows wᵢ of W are the eigenvectors of the covariance matrix (W is orthogonal, so its Jacobian is 1) The GMM parameters were found using EM This captures both the power-law spectrum and the sparseness properties
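A minimal EM sketch for fitting a zero-mean GMM to one marginal filter response. The slides do not state the number of mixture components; two zero-mean components are used here for illustration, on synthetic heavy-tailed data standing in for a real filter response.

```python
import numpy as np

# Zero-mean, two-component 1D GMM fitted by EM. The data mixes a narrow
# and a wide Gaussian, mimicking the sparse marginals of filter responses.
rng = np.random.default_rng(0)
x = np.where(rng.random(5000) < 0.8,
             rng.normal(0, 1, 5000),   # mostly small responses
             rng.normal(0, 5, 5000))   # occasional large ones

w = np.array([0.5, 0.5])     # mixture weights
var = np.array([0.5, 10.0])  # component variances (means fixed at zero)

def normal_pdf(x, var):
    return np.exp(-x**2 / (2 * var)) / np.sqrt(2 * np.pi * var)

for _ in range(100):
    # E-step: posterior responsibility of each component for each sample
    p = w[None, :] * normal_pdf(x[:, None], var[None, :])
    r = p / p.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights and variances
    w = r.mean(axis=0)
    var = (r * x[:, None]**2).sum(axis=0) / r.sum(axis=0)

avg_ll = np.log((w * normal_pdf(x[:, None], var)).sum(axis=1)).mean()
```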

Results – GMM with PCA [Table: likelihood by patch size (six sizes, up to 20×20) for Test Set 1 (JPG), Test Set 2 (JPG), Test Set 3 (PNG) and Test Set 4 (Noise); the numeric values were not preserved in the transcript.]

The models – Generalized Gaussian with PCA Finally, instead of using a GMM, we now use a Generalized Gaussian This has the advantage of having fewer parameters, while still capturing sparseness: p(y) = β / (2αΓ(1/β)) · exp(−(|y|/α)^β) Parameters were obtained directly from the training set
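The Generalized Gaussian density can be written out directly: α is a scale and β a shape parameter, where β = 2 recovers an ordinary Gaussian and β < 2 gives the sparse, heavy-tailed marginals seen in filter responses. As a sanity check, the β = 2 case matches N(0, 1/2) when α = 1:

```python
import numpy as np
from math import gamma

# Generalized Gaussian: f(y) = beta / (2 alpha Gamma(1/beta)) * exp(-(|y|/alpha)**beta)
def gen_gaussian_logpdf(x, alpha, beta):
    """Log-density of the zero-mean Generalized Gaussian."""
    return (np.log(beta) - np.log(2 * alpha) - np.log(gamma(1.0 / beta))
            - (np.abs(x) / alpha) ** beta)

x = np.linspace(-5, 5, 1001)
gg_log = gen_gaussian_logpdf(x, alpha=1.0, beta=2.0)
# With alpha = 1 and beta = 2 this is exactly a zero-mean Gaussian
# with variance 1/2, whose log-density is -0.5*log(pi) - x**2:
gauss_log = -0.5 * np.log(np.pi) - x**2
```

Note that β = 1 gives a Laplace distribution, so the single shape parameter spans the Gaussian-to-sparse range that the GMM needed many components to cover.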

Results – Generalized Gaussian with PCA [Table: likelihood by patch size (six sizes, up to 20×20) for Test Set 1 (JPG), Test Set 2 (JPG), Test Set 3 (PNG) and Test Set 4 (Noise); the numeric values were not preserved in the transcript.]

The GG shape parameter During the analysis of the data we encountered a strange phenomenon: marginal distributions get wider as we measure higher-frequency filter responses. This is not due to an increase in variance (which drops as we go to higher frequencies). We modeled this using the shape parameter obtained from the samples.
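One way to estimate the shape parameter from samples is maximum likelihood; the slides do not say which fitting method was actually used, so SciPy's `gennorm` (its `beta` is exactly the GG shape parameter) serves as a stand-in here. We draw sparse data with a known shape and recover it:

```python
from scipy.stats import gennorm

# Estimate the Generalized Gaussian shape parameter from samples.
# beta < 2 means sparser (heavier-tailed) than a Gaussian.
true_beta = 0.8
samples = gennorm.rvs(true_beta, loc=0.0, scale=1.0, size=20000, random_state=0)

# Fix the location at zero (filter responses are zero-mean) and fit by MLE
beta_hat, loc_hat, scale_hat = gennorm.fit(samples, floc=0)
```

Fitting this per PCA component would reproduce the per-frequency shape curves shown in the following slides.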

Shape parameter for test set 1

Shape parameter for test set 2

Shape parameter for test set 3

Shape parameter for test set 4 - PNG

Shape parameter for noise test set

Conclusion This is (very) early work, still in progress A lot of things left to do: – Try more models and filters (ICA is in progress) – Actually compare the different models – Try to make some sense of the shapes of the distributions – Look into higher-order dependencies and correlations – A lot more…

Thank you! Questions?