
1 Statistics of natural images May 30, 2010 Ofer Bartal Alon Faktor 1

2 Outline Motivation Classical statistical models New MRF model approach Learning the models Applications and results 2

3 Motivation Big variance in appearance Can we even dream of modeling this? 3

4 Motivation Main questions: – Do all natural images obey some common “rules”? – How can one find these “rules”? – How can these “rules” be used for computer vision tasks? 4

5 Motivation Why bother to model at all? “Noise”, uncertainty A model helps choose the “best” possible answer Let’s see some examples 5 Natural image model

6 Noise-blur removal Consider the classical deconvolution problem It can be formulated as a linear set of equations: y = Hx + n, where H is the blur matrix, x the unknown image and n the noise 6

7 Noise-blur removal (illustration of the linear system y = Hx + n) 7

8 Inpainting H is the identity matrix with the rows of the missing pixels removed, giving an under-determined system 8
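A minimal numerical sketch (not from the slides) of what H looks like in the two problems, using a 1-D signal for readability: for deconvolution H is a banded blur matrix, for inpainting it is the identity with the rows of the missing pixels removed, so the system y = Hx + n is under-determined.

import numpy as np

n = 8
x = np.random.rand(n)                        # unknown "image" (1-D for simplicity)
kernel = [0.25, 0.5, 0.25]                   # blur kernel

# Deconvolution: H applies the blur, one shifted copy of the kernel per row.
H_blur = np.zeros((n, n))
for i in range(n):
    for offset, k in zip((-1, 0, 1), kernel):
        if 0 <= i + offset < n:
            H_blur[i, i + offset] = k
y_blur = H_blur @ x + 0.01 * np.random.randn(n)   # y = Hx + n

# Inpainting: H is the identity with the rows of the missing pixels removed.
observed = np.array([0, 1, 2, 4, 6, 7])      # pixels 3 and 5 are missing
H_inpaint = np.eye(n)[observed]
y_inpaint = H_inpaint @ x                    # fewer equations than unknowns
print(H_blur.shape, H_inpaint.shape)         # (8, 8) vs (6, 8)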

9 Motivation Problems: – Unknown noise – H may be singular (Deconvolution) – H may be under-determined (Inpainting) So there can be many solutions. How can we find the “right” one? 9

10 Motivation Goal: Estimate x – Assume: a prior model of natural images p(x), and a prior model of the noise, i.e. of p(y|x) – Use the MAP estimator to find x: x̂ = argmax_x p(x|y) 10

11 Energy Minimization problem The MAP problem can be reformulated as the energy minimization x̂ = argmin_x [ -log p(y|x) - log p(x) ] 11

12 Proof: 12
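The formula on this slide is an image and is not preserved in the transcript; the standard derivation, under the priors of slide 10, is:

\hat{x} = \arg\max_x p(x \mid y)
        = \arg\max_x \frac{p(y \mid x)\, p(x)}{p(y)}
        = \arg\max_x p(y \mid x)\, p(x)
        = \arg\min_x \big[ -\log p(y \mid x) - \log p(x) \big]
        \equiv \arg\min_x E(x),

since p(y) does not depend on x and the logarithm is monotone.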

13 Classical models Smoothness prior (model of image gradients) – Gaussian prior (LS problem) – L1 Prior and sparse prior (IRLS problem) Image gradient 13

14 Gaussian Priors Assume: – A Gaussian prior on the gradients of x: p(x) ∝ exp(-λ‖Dx‖²) – Gaussian noise: p(y|x) ∝ exp(-‖y - Hx‖²/2σ²) Using these assumptions, the MAP estimate becomes a least-squares problem: x̂ = argmin_x ‖Hx - y‖² + λ‖Dx‖² (with λ absorbing the noise variance) 14
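A minimal numerical sketch of the resulting least-squares problem (1-D, dense matrices; the blur kernel and λ are hypothetical). Under a Gaussian gradient prior and Gaussian noise the MAP estimate has the closed form x̂ = (HᵀH + λDᵀD)⁻¹ Hᵀy, with D a finite-difference operator:

import numpy as np

n = 64
rng = np.random.default_rng(0)
x_true = np.cumsum(rng.standard_normal(n)) * 0.1        # a smooth-ish 1-D signal

# 3-tap blur H and a noisy observation y = Hx + n
H = np.zeros((n, n))
for i in range(n):
    for offset, k in zip((-1, 0, 1), (0.25, 0.5, 0.25)):
        if 0 <= i + offset < n:
            H[i, i + offset] = k
y = H @ x_true + 0.05 * rng.standard_normal(n)

D = np.eye(n, k=1) - np.eye(n)                          # forward-difference gradient operator
lam = 1.0                                               # prior weight (hypothetical)
x_map = np.linalg.solve(H.T @ H + lam * D.T @ D, H.T @ y)
print(float(np.mean((x_map - x_true) ** 2)))            # reconstruction error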

15 Non-Gaussian Priors Empirical results: image gradients have a non-Gaussian, heavy-tailed distribution We therefore assume an L1 or sparse prior We solve the resulting problem by IRLS – iteratively re-weighted least squares 15
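A minimal IRLS sketch for an L1 gradient prior on a 1-D de-noising problem (H = I here; λ, ε and the iteration count are hypothetical). Each iteration solves a weighted least-squares problem with weights 1/(|Dx| + ε) taken from the current estimate:

import numpy as np

n = 64
rng = np.random.default_rng(1)
x_true = np.repeat(rng.standard_normal(4), n // 4)      # piecewise-constant signal
y = x_true + 0.2 * rng.standard_normal(n)               # noisy observation
D = np.eye(n, k=1) - np.eye(n)                          # gradient operator

lam, eps = 0.5, 1e-3
x = y.copy()
for _ in range(30):
    w = 1.0 / (np.abs(D @ x) + eps)                     # re-weighting from current estimate
    x = np.linalg.solve(np.eye(n) + lam * D.T @ np.diag(w) @ D, y)
print(float(np.mean((x - x_true) ** 2)), float(np.mean((y - x_true) ** 2)))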

16 De-convolution Results (panels: blurred image, Gaussian prior, sparse prior) Good results on simple images 16

17 De-noising Results (panels: noisy image, de-noising result) Poor results on real natural images 17

18 Classical models – Pros and Cons Advantages: – Simple and easy to implement Disadvantages: – Too heuristic – Captures only one property – smoothness – Biased towards totally smooth images: 18

19 Going Beyond Classical Models 19

20 Modern Approach The model is based on image properties Properties are chosen using an image dataset Questions: 1. What types of properties? Responses to linear filters. 2. How do we find good properties? Either a pre-determined bank or learning from data. 3. How should we combine the properties into one distribution? We will see how. 20

21 Mathematical framework Want: A model p(I) of real distribution f(I). Computationally hard: – A 100x100 pixel image has 10,000 variables Can explicitly model only a few dimensions at a time 21 Arrow = viewpoint of few dimensions

22 Mathematical framework A viewpoint is a response to a linear filter A distribution over these responses is a marginal of the real distribution f(I) (Marginal = distribution over a subset of variables) 22 Arrow = marginal of f(I)

23 Mathematical framework If p(I) and f(I) have the same marginal distributions over all linear filters, then p(I)=f(I) (proposition by Zhu and Mumford) “Hope”: If we choose K “good” filters then p(I) and f(I) will be “close”. 23 How do we measure “close”?

24 Distance between distributions Kullback-Leibler divergence KL(f‖p) Problem – f(I) is unknown Proposition – use the average log-likelihood of the observed images instead: it measures the fit of the model to the observations 24
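Written out (a standard identity; the slide's own formulas are images and not preserved):

\mathrm{KL}(f \,\|\, p) = \int f(I) \log \frac{f(I)}{p(I)} \, dI
  = -\,\mathbb{E}_{f}\!\left[ \log p(I) \right] - H(f)
  \approx -\frac{1}{N} \sum_{m=1}^{N} \log p(I_m) + \text{const},

so minimizing the divergence to the unknown f(I) amounts to maximizing the average log-likelihood of the N observed images I_1, …, I_N, which is exactly a measure of how well the model fits the data.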

25 Illustration 25

26 Getting synthesized images Get synthesized images by sampling the learned model Sample using Markov Chain Monte Carlo (MCMC). Drawback: Learning process is slow 26

27 Our model P(I) – an MRF MRF = Markov Random Field An MRF is based on a graph G=(V,E): V – pixels, E – edges between pixels that affect each other Our distribution is the MRF: p(I) = (1/Z) ∏_cliques ψ(I_clique) 27

28 Simple grid MRF Here, cliques are edges Every pixel belongs to 4 cliques 28

29 MRF We limit ourselves to: – Cliques of fixed size (over-lapping patches) – The same potential for all cliques We get: p(I) = (1/Z) ∏_patches ψ(I_patch) 29

30 MRF simulation 30

31 Histogram simulation Histogram of a marginal 31

32 MRF In terms of convolutions: denote the set of filters {F_k} and the set of potential functions {ψ_k}; then p(I) = (1/Z) exp( -∑_k ∑_pixels ψ_k( (F_k * I)(pixel) ) ) 32
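A minimal sketch of evaluating such an energy with convolutions (hypothetical filters and potentials; scipy.signal.convolve2d does the per-clique filtering):

import numpy as np
from scipy.signal import convolve2d

def mrf_energy(image, filters, potentials):
    # E(I) = sum_k sum_positions psi_k( (F_k * I)(position) );  p(I) = exp(-E(I)) / Z
    energy = 0.0
    for F, psi in zip(filters, potentials):
        responses = convolve2d(image, F, mode='valid')   # one clique per valid position
        energy += psi(responses).sum()
    return energy

# Hypothetical example: horizontal and vertical gradient filters with an L1 potential.
filters = [np.array([[1.0, -1.0]]), np.array([[1.0], [-1.0]])]
potentials = [np.abs, np.abs]
image = np.random.rand(32, 32)
print(mrf_energy(image, filters, potentials))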

33 MRF - A simple example Cliques of size 1 Pixels are i.i.d. and distributed by the grayscale histogram 33 Drawback: cliques are too small

34 MRF - Another simple example Clique = the whole image Result: a uniform distribution over the images in the dataset 34 Drawback: cliques are too big

35 Formulation as Gibbs models All pixels are i.i.d. and distributed by the grayscale histogram, written as a Gibbs model with single-pixel potentials 35

36 Formulation as Gibbs models Uniform distribution on the image dataset 36

37 Revisiting classical models Actually, the classical model is a pairwise MRF: It has cliques of size 2: It has only 2 linear filters (horizontal and vertical gradients) => 2 marginals No guarantee that p(I) will be close to f(I) 37

38 Comparison between models Classical Linear MRF 38

39 Zhu and Mumford’s approach (1997) We want to find K “good” filters Strategy: – Start off with a bank B of possible filters – Choose subset that minimizes the distance between p(I) and f(I) – For computational reasons, choose filters one by one using a greedy method 39

40 MRF simulation 40

41 Choosing the next filter AIG = the difference between the model p(I) and the data, from the viewpoint of the filter's marginal AIF = the difference between different images in the dataset, from the viewpoint of the filter's marginal 41

42 Algorithm – Filter selection (flowchart: bank of filters → model) 42

43 Algorithm 43

44 Learning the potentials (flowchart: init → calculate update → model), using maximum entropy on P 44
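A schematic sketch of the update step, assuming the potentials are discretized into histogram bins (the λ arrays act as Lagrange multipliers; estimating the model histograms requires MCMC sampling, which is omitted here):

import numpy as np

def update_potentials(lambdas, data_hist, model_hist, step=0.1):
    # Maximum-entropy / likelihood gradient: raise the potential on bins where the
    # model puts more mass than the data, lower it where it puts less.
    return {k: lambdas[k] + step * (model_hist[k] - data_hist[k]) for k in lambdas}

# Toy usage with one filter and 5 response bins (made-up histograms).
lambdas = {'grad_x': np.zeros(5)}
data_hist = {'grad_x': np.array([0.1, 0.2, 0.4, 0.2, 0.1])}
model_hist = {'grad_x': np.array([0.2, 0.2, 0.2, 0.2, 0.2])}
print(update_potentials(lambdas, data_hist, model_hist))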

45 The bank of filters Filter types: – Intensity filter (1×1) – Isotropic filters – Laplacian of Gaussian (LG) – Directional filters – Gabor (Gcos, Gsin) Computation at different scales – image pyramid 45

46 Running example of algorithm Experiment I Use only small filters 46

47 Results All learned potentials have a diffusive nature 47

48 Running example of algorithm Experiment II Only gradient filters, at different scales Small filters -> diffusive potentials (as expected) Surprisingly: Large filters -> reactive potentials (plots: diffusive vs. reactive) 48

49 The discovery of reactive potentials 49

50 Examples of the synthesized images (panels: Experiment I, Experiment II) The Experiment II image is more “natural” because it has some regions with sharp boundaries 50

51 Outline We have seen: – MRF models – Selection of filters from a bank – Learning potentials Now: – Data-driven filters – Analytic results for simple potentials – Making sense in results – Applications 51

52 Roth and Black’s approach (table: filters / potentials) Previous approach: chosen from a bank / learned non-parametrically Roth and Black: learned from data / learned parametrically, with filters and potentials learned together 52

53 Motivation – model of natural patches Why learn filters from data? Inspiration from models of natural patches: – Sparse coding – Component analysis – Product of experts 53

54 Motivation – Sparse Coding of patches Goal: find a set of filters such that every patch has a sparse representation Learned from a database of natural patches Only a few filters should fire on a given patch 54

55 Motivation – Component analysis Learn by component analysis: – PCA – ICA Results in “filter-like” components – PCA – the first components look like contrast filters – ICA – components look like Gabor filters 55

56 PCA results (components ordered from high to low variance) 56

57 ICA results Independent filters We can derive a model for patches: p(patch) ∝ ∏_k p_k(w_k · patch), since the filter responses are assumed independent 57
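A minimal sketch of both analyses with scikit-learn (random stand-in patches here, so the components will not actually look like contrast or Gabor filters; with real natural-image patches they do):

import numpy as np
from sklearn.decomposition import PCA, FastICA

patches = np.random.rand(5000, 64)                 # rows = flattened 8x8 patches
patches -= patches.mean(axis=0)

pca = PCA(n_components=16).fit(patches)
ica = FastICA(n_components=16, random_state=0, max_iter=500).fit(patches)

pca_filters = pca.components_.reshape(-1, 8, 8)    # ordered from high to low variance
ica_filters = ica.components_.reshape(-1, 8, 8)    # independent components
print(pca_filters.shape, ica_filters.shape)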

58 Motivation – Product of experts A more sophisticated model for natural patches: p(patch) ∝ ∏_k φ_k(w_k · patch) Training by MLE => “intuitive” filters: texture and contrast filters 58

59 Extension of POE to FOE: Field of Experts (FOE) 59 Roth S., Black M. J., Fields of Experts. IJCV, 2009

60 The experts Student-t experts: φ_k(r) = (1 + r²/2)^(-α_k) 60

61 Meaning of α_k A higher α_k means: – Punishes high responses more severely – A filter with a higher weight 61

62 Learning the model (flowchart: random init → MCMC sampling → model)

63 Learning the model Finding the ML estimate of the parameters Θ = {J_k, α_k} Update rule: gradient ascent on the log-likelihood, θ ← θ + η ( ⟨∂E/∂θ⟩_model - ⟨∂E/∂θ⟩_data ) For the expectation under the model we use MCMC – very slow 63

64 Contrastive divergence (CD) algorithm (Hinton, 2002) Start the Markov chain from a “good” initial guess – X (the data distribution) Run MCMC for only j steps The MCMC samples will be close enough to the model distribution New update rule: use the j-step samples in place of the model expectation in the gradient 64
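A schematic sketch of one CD-j update for a generic energy-based model p(x; θ) ∝ exp(-E(x; θ)) (energy_grad and mcmc_step are hypothetical callables the caller must supply):

import numpy as np

def contrastive_divergence_step(theta, data, energy_grad, mcmc_step, j=1, lr=1e-3):
    # energy_grad(x, theta): dE/dtheta averaged over the batch x
    # mcmc_step(x, theta):   one MCMC transition that leaves p(.; theta) invariant
    samples = np.copy(data)            # start the chain at the data ("good" initial guess)
    for _ in range(j):                 # run only j steps instead of running to convergence
        samples = mcmc_step(samples, theta)
    # ML gradient ~ <dE/dtheta>_model - <dE/dtheta>_data, with the short chain
    # standing in for true samples from the model distribution.
    return theta + lr * (energy_grad(samples, theta) - energy_grad(data, theta))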

65 Results of learning FOE Filters aren’t “intuitive” 65

66 Basis for representing the filters Instead of learning the filters directly, we can learn them by rotating a fixed basis Two options: – a whitened basis – an “inverse” whitened basis (both defined from Σ, the covariance matrix of natural image patches) 66

67 So far… (filters / potentials) Chosen from a bank / learned non-parametrically Small filters -> diffusive potentials, large filters -> reactive potentials (non-intuitive) 67

68 So far… (filters / potentials) Learned from a database / learned parametrically The learned filters are non-intuitive 68

69 What now? Revisiting POE and FOE with Gaussian potentials Relation to non-Gaussian potentials Making sense of the previous results Weiss Y., Freeman W. T. What makes a good model of natural images? CVPR, 2007 69

70 Gaussian POE 70

71 Gaussian POE Claim: Z is constant for any set of K orthonormal filters, so maximizing the likelihood has an analytic solution – the K minor components of the data 71
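A minimal sketch of that analytic solution: the K eigenvectors of the patch covariance with the smallest eigenvalues (random stand-in patches, so the filters themselves are not meaningful here):

import numpy as np

patches = np.random.rand(5000, 64)              # stand-in for flattened 8x8 patches
patches -= patches.mean(axis=0)

cov = np.cov(patches, rowvar=False)             # 64 x 64 patch covariance
eigvals, eigvecs = np.linalg.eigh(cov)          # eigenvalues returned in ascending order

K = 8
minor_components = eigvecs[:, :K].T.reshape(K, 8, 8)   # the K least-variance directions
print(eigvals[:K])                              # these filters respond weakly to the data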

72 Non-intuitive high-frequency filters Reminder – PCA results (components ordered from high to low variance) Example of learned filters 72

73 Gaussian FOE 73

74 Gaussian FOE 74

75 Gaussian FOE The optimal filters satisfy: they put their energy where natural images have little spectral power => Optimal filters have high frequencies 75

76 Gaussian Scale Mixture (GSM) Non-Gaussian potentials can be modeled by a GSM The properties of the Gaussian FOE carry over to GSM potentials 76
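For reference, a GSM density in one dimension (a standard definition; the weights π_i and scales σ_i are what is fit to the heavy-tailed potential):

p(r) = \sum_i \pi_i \, \mathcal{N}(r;\, 0,\, \sigma_i^2),
\qquad \pi_i \ge 0, \quad \sum_i \pi_i = 1 .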

77 Revisiting FOE The Student-t expert can be fit by a GSM, so the learned filters have the same property: they are high-frequency filters (figure: a natural image vs. the Roth and Black filters) 77

78 Learning FOE with fixed filters Algorithm prefers high-frequency filters 78

79 Conclusion For Gaussian potentials and GSM’s: learning => high-frequency filters There is experimental evidence for this phenomenon Maybe there is a “logic” behind this non-intuitive result? 79

80 Making Sense of results Criterion for “good” filters for patches – They rarely fire on natural images and fire frequently on all other images (figure: histogram of filter responses for patches from natural images vs. white noise) 80

81 Making Sense of results An image is modeled by what you don’t expect to find in it This is satisfied by the classical prior of smooth gradients But why limit ourselves to intuitive filters? Maybe non-intuitive filters can do better… 81

82 Revisiting diffusive and reactive potentials (figure: filter-response histograms for patches from natural images vs. white noise, with diffusive and reactive potentials) 82

83 Inference We learned a model We can use it for inference problems – Corrupted information – Missing information Inference by (loopy) belief propagation Approximate inference by gradient-based optimization 83

84 Belief Propagation Observed data is incorporated into the model through the likelihood p(y|x), added as local evidence terms 84

85 Belief Propagation A message-passing algorithm Exact only on tree MRFs Efficient only on pairwise MRFs 85
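For a pairwise MRF the sum-product updates are (standard form; φ_i is the local evidence from the previous slide, ψ_{ij} the pairwise potential, N(i) the neighbours of node i):

m_{i \to j}(x_j) = \sum_{x_i} \phi_i(x_i)\, \psi_{ij}(x_i, x_j) \prod_{k \in N(i) \setminus j} m_{k \to i}(x_i),
\qquad
b_i(x_i) \propto \phi_i(x_i) \prod_{k \in N(i)} m_{k \to i}(x_i).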

86 Alternative by Roth and Black Reminder: approximate inference by gradient-based optimization Advantage: low computational cost Drawback: only a local minimum if the objective is not convex (the objective combines the uncertainty/noise model with the learned model) 86

87 Partition function The partition function Z does not depend on x, so its gradient vanishes => No need to estimate the partition function 87

88 The gradient step How do we differentiate the second term (the prior)? By a mathematical “trick” we get: ∇_x log p(x) = ∑_k J_k^- * ψ_k'(J_k * x), where J_k^- is the filter J_k mirrored around its center 88

89 De-noising Assume Gaussian noise, so the gradient step is: x ← x + η [ λ ∑_k J_k^- * ψ_k'(J_k * x) + (y - x)/σ² ] 89
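A minimal runnable sketch of this update with two fixed gradient filters and a Student-t-style expert (the real FOE uses larger learned filters; σ, λ, the step size and the iteration count here are hypothetical):

import numpy as np
from scipy.signal import convolve2d, correlate2d

def log_expert_deriv(r, alpha=1.0):
    # d/dr log (1 + r^2/2)^(-alpha) = -alpha * r / (1 + r^2/2)   (Student-t expert)
    return -alpha * r / (1.0 + 0.5 * r ** 2)

def denoise(y, filters, sigma=0.1, lam=10.0, step=0.002, n_iter=300):
    x = y.copy()
    for _ in range(n_iter):
        grad_prior = np.zeros_like(x)
        for J in filters:
            r = convolve2d(x, J, mode='same')                               # responses J_k * x
            grad_prior += correlate2d(log_expert_deriv(r), J, mode='same')  # mirrored filter
        x = x + step * (lam * grad_prior + (y - x) / sigma ** 2)            # gradient ascent step
    return x

filters = [np.array([[1.0, -1.0]]), np.array([[1.0], [-1.0]])]
clean = np.ones((32, 32))
noisy = clean + 0.1 * np.random.randn(32, 32)
denoised = denoise(noisy, filters)
print(np.abs(noisy - clean).mean(), np.abs(denoised - clean).mean())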

90 Results 90

91 Results 91

92 Results Original; Noisy (20.29dB); FOE (28.72dB, a general prior); Portilla (wavelets, 28.9dB, state of the art); Non-local means (28.21dB); Standard non-linear diffusion (27.18dB) 92

93 Results on Berkeley database (plots: output PSNR vs. input PSNR, for low and high noise; methods: Wiener filter, non-linear diffusion, FOE, Portilla1, Portilla2) 93

94 How many 3x3 filters to take? (plot: performance vs. number of filters, filter size 3×3) Performance starts saturating when we reach 8 filters 94

95 Dependence on size and shape of clique What is the best filter? 95

96 Random and Fixed filters (comparison: FOE learned filters vs. random filters vs. fixed filters) 96

97 Inpainting - Reminder 97 Problem: pixels outside the mask can change Solution: constrain them

98 Inpainting Assume the pixels outside the 0-1 mask M don’t change So the gradient step updates only the masked pixels: x ← x + η M ⊙ ∑_k J_k^- * ψ_k'(J_k * x) (figure: the 0-1 mask and the image we want to inpaint) 98
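A minimal runnable sketch of this masked update, in the same spirit as the de-noising sketch above (hypothetical mask, filters and step size; only the masked pixels move, and there is no data term inside the hole):

import numpy as np
from scipy.signal import convolve2d, correlate2d

def log_expert_deriv(r, alpha=1.0):
    return -alpha * r / (1.0 + 0.5 * r ** 2)       # Student-t expert derivative

def inpaint(image, mask, filters, step=0.05, n_iter=500):
    # mask == 1 marks missing pixels; pixels outside the mask stay fixed.
    x = image.copy()
    for _ in range(n_iter):
        grad = np.zeros_like(x)
        for J in filters:
            r = convolve2d(x, J, mode='same')
            grad += correlate2d(log_expert_deriv(r), J, mode='same')
        x = x + step * mask * grad                 # update only the masked pixels
    return x

filters = [np.array([[1.0, -1.0]]), np.array([[1.0], [-1.0]])]
image = np.tile(np.linspace(0.0, 1.0, 32), (32, 1))      # simple ramp image
mask = np.zeros_like(image)
mask[12:20, 12:20] = 1.0                                 # the hole to fill
corrupted = image.copy()
corrupted[mask == 1] = 0.0
restored = inpaint(corrupted, mask, filters)
print(np.abs(restored - image)[mask == 1].mean())        # error inside the hole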

99 Results 99

100 Results 100

101 Results FOE vs. Bertalmio – PSNR: 29.06dB vs. 27.56dB – SSIM: 0.9371 vs. 0.9167 101

102 Pros and Cons Performs well on thin scratches or small holes (even if they cover most of the image) Not able to fill large holes Not designed to handle textures 102

103 Thank you for Listening… 103

