1 Why do we Need a Statistical Model in the First Place?
- Any image processing algorithm has to work on a collection (class) of images instead of a single one.
- A mathematical model gives us an abstraction of the common properties of the images within the same class.
- The model is our hypothesis and the images are our observation data.
- In physics, can F = ma explain the relationship between force and acceleration? In image processing, can this model fit this class of images?

2 Introduction to Texture Synthesis
- Motivating applications
- Texture synthesis vs. image denoising
- Statistical image modeling revisited
- Modeling correlation/dependency
- Transform-domain texture synthesis
- Nonparametric texture synthesis
- Performance evaluation issues

3 Computer Graphics in SPORE

4 What is an Image/Texture Model? [Diagram: for speech, analysis extracts parameters (pitch, LPC coefficients, residues, …) from which synthesis reconstructs the signal; for texture, analysis estimates a probability model P(X), parametric or nonparametric, from which synthesis generates new samples.]

5 How do we Tell the Goodness of a Model?
- Synthesis (in statistical language, it is called sampling): use computer simulation to generate samples from the hypothesized model.
- Does the generated sample (experimental result) look like the data of our interest?
- Analogy: is a coin fair? Flip the coin many times and check whether the generated sequence (experimental result) contains roughly the same number of Heads and Tails.
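
As an illustration of this sampling test, here is a minimal sketch in Python/NumPy (my example, not from the slides) that simulates flips under the fair-coin hypothesis and checks the fraction of Heads:

```python
import numpy as np

rng = np.random.default_rng(0)

p = 0.5                         # hypothesized model: a fair coin
flips = rng.random(10000) < p   # computer simulation: flip 10000 times

# Does the generated sequence look like fair-coin data?
print("fraction of Heads:", flips.mean())  # should be close to 0.5
```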

6 Discrete Random Variables (taken from EE465) Example III: For a gray-scale image (L = 256), we can use the notation p(r_k), k = 0, 1, …, L-1, to denote the histogram of an image with L possible gray levels r_k, k = 0, 1, …, L-1, where p(r_k) is the probability of the k-th gray level (random event) occurring. The discrete random variables in this case are the gray levels. Question: what is wrong with viewing all pixels as being generated from independent identically distributed (i.i.d.) random variables?

7 To Understand the Problem
- Theoretically, if all pixels were indeed i.i.d., then a random permutation of the pixels should produce another image of the same class (natural images).
- Experimentally, we can write a simple MATLAB function to implement random permutation and test its impact (an equivalent sketch follows below).
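
Since the slide only names the MATLAB experiment, here is an equivalent sketch in Python/NumPy; the filename "lena.png" is a placeholder for whatever test image is at hand:

```python
import numpy as np
from PIL import Image

# Load a grayscale test image ("lena.png" is a placeholder path).
img = np.asarray(Image.open("lena.png").convert("L"))

# Randomly permute all pixels: the histogram (all an i.i.d. model sees)
# is unchanged, but the spatial structure is destroyed.
rng = np.random.default_rng(0)
permuted = rng.permutation(img.flatten()).reshape(img.shape)

Image.fromarray(permuted.astype(np.uint8)).save("lena_permuted.png")
```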

8 Permuted image with a histogram identical to Lena's

9 Random Process
- Random processes are the foundation for research in communication and signal processing (that is why EE513 is the core requirement for the qualifying exam).
- A random process is the vector generalization of a (scalar) random variable.

10 Correlation and Dependency (N = 2) If the condition E[xy] = E[x]E[y] holds, then the two random variables are said to be uncorrelated. From our earlier discussion, we know that if x and y are statistically independent, then p(x, y) = p(x)p(y), in which case we can write E[xy] = E[x]E[y]. Thus, we see that if two random variables are statistically independent, then they are also uncorrelated. The converse of this statement is not true in general.
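
A quick numerical check of that last claim (my own example, not from the slides): take X symmetric around zero and Y = X^2, which are strongly dependent yet uncorrelated:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(100000)  # symmetric around zero
y = x**2                         # fully determined by x, hence dependent

# Empirical covariance E[xy] - E[x]E[y]: close to zero, so x and y are
# (nearly) uncorrelated despite being statistically dependent.
print(np.mean(x * y) - np.mean(x) * np.mean(y))
```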

11 Covariance of two Random Variables The central moment µ_11 = E[(x − E[x])(y − E[y])] = E[xy] − E[x]E[y] is called the covariance of x and y.

12 Recall: How to Calculate E[XY]? Empirical solution: given M paired samples (x_i, y_i), estimate E[XY] ≈ (1/M) Σ_{i=1}^{M} x_i y_i. Note: when Y = X, we get the autocorrelation.
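
A minimal NumPy sketch of this empirical solution (helper names are mine):

```python
import numpy as np

def empirical_mean_product(x, y):
    """Empirical estimate of E[XY] from M paired samples."""
    return np.mean(x * y)

def empirical_autocorr(x, k):
    """Empirical autocorrelation r(k): set Y to X shifted by k samples."""
    return np.mean(x[:len(x) - k] * x[k:])
```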

13 Stationary Process* P(X_1, …, X_N) = P(X_{K+1}, …, X_{K+N}) for any K, N (all statistics are time-invariant). Here K is the shift in space/time location and N is the order of the statistics.

14 Gaussian Process
- Density with mean vector m and covariance matrix C: p(x) = (2π)^{-N/2} |C|^{-1/2} exp(-(x - m)^T C^{-1} (x - m) / 2).
- For convenience, we often assume zero mean (if the mean is nonzero, we can subtract it).
- The question is: is the distribution of the observation data Gaussian or not?
- A Gaussian process is stationary as long as its first- and second-order statistics are time-invariant.
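
To tie this back to synthesis-as-sampling, here is a hedged sketch of drawing one sample from a zero-mean Gaussian model with a given covariance C; the small 3x3 covariance is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 3x3 covariance with r(k) = 0.9**|k| (cf. the AR(1)
# example later); any symmetric positive definite C works.
C = np.array([[1.00, 0.90, 0.81],
              [0.90, 1.00, 0.90],
              [0.81, 0.90, 1.00]])

# If z ~ N(0, I) and C = L L^T (Cholesky), then x = L z ~ N(0, C).
L = np.linalg.cholesky(C)
x = L @ rng.standard_normal(3)
```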

15 The Curse of Dimensionality
- Even for a small image such as 64-by-64, we need to model it by a random process in a 4096-dimensional space (R^4096), whose covariance matrix has size 4096-by-4096.
- The curse of dimensionality was pointed out by R. Bellman in the 1960s; even today's computing resources cannot handle brute-force nearest-neighbor search in relatively high-dimensional spaces.

16 Markovian Assumption [Portraits: Andrei A. Markov (1856-1922), Pafnuty L. Chebyshev (1821-1894), Andrey N. Kolmogorov (1903-1987)]

17 A Simple Idea The future is determined by the present but is independent of the past. Note that stationarity and Markovianity are two “orthogonal” perspectives for imposing constraints on random processes.

18 Markov Process N-th order Markovian: the conditional distribution of X(n) given the entire past depends only on the N past samples, P(X_n | X_{n-1}, X_{n-2}, …) = P(X_n | X_{n-1}, …, X_{n-N}). This conditional distribution can be given a parametric or non-parametric characterization.

19 Autoregressive (AR) Model Parametric model (linear prediction): X(n) = Σ_{k=1}^{N} a_k X(n-k) + Z(n), where Z(n) is the excitation. In the z-transform domain this is an infinite impulse response (IIR) filter: H(z) = 1 / (1 - Σ_{k=1}^{N} a_k z^{-k}).

20 Example: AR(1) with a = 0.9. The autocorrelation function decays geometrically: r(k) ∝ a^{|k|}. [Plot: r(k) versus lag k for a = 0.9]
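
A quick simulation check of this geometric decay (my sketch, assuming unit-variance white Gaussian excitation):

```python
import numpy as np

rng = np.random.default_rng(0)
a, M = 0.9, 200_000

# Simulate AR(1): X(n) = a X(n-1) + Z(n), Z white Gaussian noise.
x = np.zeros(M)
z = rng.standard_normal(M)
for n in range(1, M):
    x[n] = a * x[n - 1] + z[n]

# The normalized autocorrelation r(k)/r(0) should match a**k.
r0 = np.mean(x * x)
for k in (1, 2, 3):
    print(k, np.mean(x[:-k] * x[k:]) / r0, a**k)
```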

21 Yule-Walker Equation Multiplying the AR model by X(n-k) and taking expectations gives r(k) = Σ_{j=1}^{N} a_j r(k-j) for k = 1, …, N, i.e., C a = r, where C is the N-by-N covariance (autocorrelation) matrix with entries C_{ij} = r(i-j).
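
A minimal sketch of solving the Yule-Walker system C a = r numerically from data (this is Approach 1 on the next slide); the function name and the empirical estimator are my choices:

```python
import numpy as np

def yule_walker(x, N):
    """Estimate AR(N) coefficients from data x by solving C a = r."""
    M = len(x)
    # Empirical autocorrelation r(0), ..., r(N).
    r = np.array([np.mean(x[:M - k] * x[k:]) for k in range(N + 1)])
    # N x N Toeplitz covariance matrix: C[i, j] = r(|i - j|).
    C = np.array([[r[abs(i - j)] for j in range(N)] for i in range(N)])
    return np.linalg.solve(C, r[1:])
```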

22 Wiener Filtering In practice, we do not know the autocorrelation functions but only the observation data X_1, …, X_M.
- Approach 1: empirically estimate r(k) from X_1, …, X_M and solve the Yule-Walker equation.
- Approach 2: formulate the minimization problem min_a Σ_n (X(n) - Σ_{k=1}^{N} a_k X(n-k))^2.
- Exercise: you can verify that they end up with the same results.

23 Least-Square Estimation Stack the linear prediction equations into matrix form b ≈ A a: M equations in N unknown variables (an overdetermined system). The least-squares solution satisfies the normal equations (A^T A) a = A^T b.

24 Least-Square Estimation (Cont'd) If you write out the normal equations, they are exactly the empirical way of estimating the autocorrelation functions, R_xx a = r_x — now you have a third approach (sketched below).
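
For comparison, a sketch of Approach 2 (direct least squares); on long data it should agree with the Yule-Walker solution above, since its normal equations are exactly R_xx a = r_x:

```python
import numpy as np

def ar_least_squares(x, N):
    """Fit AR(N) by minimizing ||b - A a||^2, where row n of A holds
    the N past samples predicting b[n]."""
    M = len(x)
    A = np.column_stack([x[N - k:M - k] for k in range(1, N + 1)])
    b = x[N:]
    a, *_ = np.linalg.lstsq(A, b, rcond=None)  # solves (A^T A) a = A^T b
    return a
```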

25 From 1D to 2D [Diagram: the pixel X_{m,n} with a causal neighborhood (numbered pixels above and to the left of X_{m,n}) and a noncausal neighborhood (numbered pixels surrounding X_{m,n})]. The causality of the neighborhood depends on the application (e.g., coding vs. synthesis).

26 Experimental Justifications [Diagram: the original texture goes through Analysis to estimate the AR model parameters; Synthesis then drives the AR model with random excitation to generate a new texture.]
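
A rough sketch of the synthesis half of this experiment, under simplifying assumptions: a two-coefficient causal 2D AR model whose parameters are taken as given (in practice they would come from the analysis step, e.g., a least-squares fit as above):

```python
import numpy as np

def synthesize_ar2d(a_left, a_up, shape, sigma=1.0, seed=0):
    """Synthesize a texture from a causal 2D AR model with two
    coefficients, driven by white Gaussian random excitation.
    The first row and column stay at zero (boundary condition)."""
    rng = np.random.default_rng(seed)
    rows, cols = shape
    x = np.zeros(shape)
    for m in range(1, rows):
        for n in range(1, cols):
            x[m, n] = (a_left * x[m, n - 1] + a_up * x[m - 1, n]
                       + sigma * rng.standard_normal())
    return x

texture = synthesize_ar2d(0.5, 0.45, (64, 64))  # 64-by-64 sample
```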

27 Failure Example (I) Analysis and synthesis with N = 8, M = 4096. Another way to look at it: if X and Y are two images of disks, will (X+Y)/2 produce another disk image?

28 Failure Example (II) Analysis and synthesis with N = 8, M = 4096. Note that the reason for failure in this example is different from that of the last example (N is not large enough).

29 Summary of AR Modeling
- We start from AR models because they are relatively simple and well understood (not just for images but also for speech coding, stock market prediction, …).
- AR model parameters are related to the second-order statistics by the Yule-Walker equation.
- The AR model is equivalent to IIR filtering (linear prediction decorrelates the input signal).

30 Improvement over the AR Model
- Doubly stochastic process*: in a stationary Gaussian process, the second-order statistics are time/space invariant; in a doubly stochastic process, the second-order statistics (e.g., covariance) are modeled by another random process with hidden variables.
- Windowing technique: estimate spatially varying statistics from localized windows.

31 Why do We Need Windows? (Nothing to do with Microsoft.)
- All images have finite dimensions; they can be viewed as “windowed” versions of natural scenes.
- Any empirical estimation of statistical attributes (e.g., mean, variance) is based on the assumption that all N samples follow the same distribution.
- However, how do we know this assumption is satisfied?

32 1D Rectangular Window [Plot: signal X(n) with a sliding rectangular window of width W = 2T + 1]

33 2D Rectangular Window (W = 2T + 1) Loosely speaking, parameter estimation from a localized window is a compromise solution for handling spatially varying statistics. This idea is common to other types of non-stationary signals too (e.g., short-time speech processing).
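
A minimal sketch of the windowing idea: estimate spatially varying statistics (here, local mean and variance) inside a sliding (2T+1)-by-(2T+1) window; clipping the window at the image border is my choice:

```python
import numpy as np

def local_mean_var(img, T):
    """Local mean and variance inside a (2T+1)-by-(2T+1) window,
    clipped at the image border."""
    rows, cols = img.shape
    mean = np.zeros(img.shape)
    var = np.zeros(img.shape)
    for m in range(rows):
        for n in range(cols):
            patch = img[max(m - T, 0):m + T + 1,
                        max(n - T, 0):n + T + 1]
            mean[m, n] = patch.mean()
            var[m, n] = patch.var()
    return mean, var
```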

34 Example As the window slides through the image, we will observe that the AR model parameters vary from location to location. [Image: three window locations labeled A, B, and C] Q: The AR coefficients at B and C differ from those at A, but for different reasons. Why?

35 What is Next?
- Apply linear transformations
  - A detour through wavelet transforms
  - Wavelet-space statistical models
  - Application to texture synthesis
- From parametric to nonparametric
  - Patch-based nonparametric models
  - Texture synthesis examples
  - Application to image inpainting

