236607 Visual Recognition Tutorial: Random variables, distributions, and probability density functions


Slide 1: Contents
Random variables, distributions, and probability density functions
Discrete Random Variables
Continuous Random Variables
Expected Values and Moments
Joint and Marginal Probability
Means and variances
Covariance matrices
Univariate normal density
Multivariate Normal densities

Slide 2: Random variables, distributions, and probability density functions
A random variable $X$ is a variable whose value is set as a consequence of random events, that is, events whose outcomes are impossible to know in advance. The set of all possible outcomes is called the sample space and is denoted by $\Omega$. Such a random variable can be treated as a "nondeterministic" function $X$ that relates every possible random event to some value. We will be dealing with real random variables.
The probability distribution function is the function $F(x) = \Pr[X \le x]$, defined for every $x$.
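To make the definition concrete, here is a minimal Python sketch (not part of the original slides) that estimates $F(x) = \Pr[X \le x]$ from samples; the exponential variable and the sample size are arbitrary assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.exponential(scale=1.0, size=10_000)  # assumed example r.v.

def empirical_cdf(samples, x):
    """Estimate F(x) = Pr[X <= x] as the fraction of samples <= x."""
    return np.mean(samples <= x)

for x in (0.5, 1.0, 2.0):
    # empirical estimate vs. the exact exponential CDF 1 - exp(-x)
    print(x, empirical_cdf(samples, x), 1 - np.exp(-x))
```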

Slide 3: Discrete Random Variable
Let $X$ be a discrete random variable (d.r.v.) that can assume $m$ different values in the countable set $V = \{v_1, \dots, v_m\}$. Let $p_i$ be the probability that $X$ assumes the value $v_i$: $p_i = \Pr[X = v_i]$. The mass function values $p_i$ must satisfy $p_i \ge 0$ and $\sum_{i=1}^{m} p_i = 1$. A connection between the distribution and the mass function is given by $F(x) = \sum_{v_i \le x} p_i$.
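A small sketch of these constraints, assuming a hypothetical three-valued d.r.v. (the values and probabilities are made up for illustration):

```python
import numpy as np

# Hypothetical d.r.v.: values v_i and probabilities p_i
v = np.array([1.0, 2.0, 5.0])
p = np.array([0.2, 0.5, 0.3])
assert np.all(p >= 0) and np.isclose(p.sum(), 1.0)  # p_i >= 0, sum = 1

def F(x):
    """Distribution function: F(x) = sum of p_i over all v_i <= x."""
    return p[v <= x].sum()

print(F(0.5), F(2.0), F(10.0))  # 0.0, 0.7, 1.0
```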

Slide 4: Continuous Random Variable
The domain of a continuous random variable (c.r.v.) is uncountable. The distribution function of a c.r.v. can be defined as $F(x) = \int_{-\infty}^{x} p(t)\,dt$, where the function $p(x)$ is called a probability density function. It is important to note that the numerical value of $p(x)$ is not a "probability of $x$". In the continuous case, $p(x)\,dx$ is a value which approximately equals the probability $\Pr[x < X < x + dx]$.
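As an illustration (the standard normal density is an arbitrary choice here), integrating a density over an interval recovers a probability:

```python
import numpy as np
from scipy import integrate, stats

p = stats.norm(loc=0.0, scale=1.0).pdf  # example density p(x)

# Pr[a < X < b] = integral of p(x) over (a, b)
a, b = -1.0, 1.0
prob, _ = integrate.quad(p, a, b)
print(prob)                                   # ~0.6827
print(integrate.quad(p, -np.inf, np.inf)[0])  # total mass ~1.0
```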

Slide 5: Continuous Random Variable
Important features of the probability density function: $p(x) \ge 0$ for all $x$, $\int_{-\infty}^{\infty} p(x)\,dx = 1$, and $\Pr[a < X < b] = \int_a^b p(x)\,dx$.

Slide 6: Expected Values and Moments
The mean (or expected value, or average) of $X$ is defined by $E[X] = \mu = \int x\,p(x)\,dx$. If $Y = g(X)$ we have $E[Y] = \int g(x)\,p(x)\,dx$. The variance is defined as $\operatorname{var}(X) = \sigma^2 = E[(X - \mu)^2]$, where $\sigma$ is the standard deviation of $X$.
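A quick numerical check of these definitions, assuming an arbitrary normal sample:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(loc=3.0, scale=2.0, size=100_000)  # assumed example r.v.

mu = x.mean()                 # estimate of E[X]
var = np.mean((x - mu) ** 2)  # estimate of E[(X - mu)^2]
print(mu, var, np.sqrt(var))  # ~3.0, ~4.0, sigma ~2.0

# E[g(X)] for Y = g(X), with g(x) = x**2 as an arbitrary example
print(np.mean(x ** 2))        # ~ var + mu^2 = 13.0
```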

Slide 7: Expected Values and Moments
Intuitively, the variance of $X$ indicates the spread of its samples around its expected value (mean). An important property of the mean is its linearity: $E[aX + bY] = aE[X] + bE[Y]$. At the same time, the variance is not linear: $\operatorname{var}(aX + b) = a^2 \operatorname{var}(X)$. The $k$-th moment of the r.v. $X$ is $E[X^k]$ (the expected value is the first moment). The $k$-th central moment is $E[(X - \mu)^k]$.
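A sketch verifying linearity of the mean and the quadratic scaling of the variance (the constants a, b and the distributions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(0.0, 1.0, 100_000)
y = rng.normal(5.0, 2.0, 100_000)
a, b = 3.0, -2.0

# Linearity of the mean: E[aX + bY] = aE[X] + bE[Y]
print(np.mean(a * x + b * y), a * x.mean() + b * y.mean())

# Variance is not linear: var(aX + b) = a^2 var(X)
print(np.var(a * x + b), a ** 2 * np.var(x))

# k-th central moment E[(X - mu)^k], here k = 3
print(np.mean((x - x.mean()) ** 3))  # ~0 for a symmetric density
```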

Slide 8: Joint and Marginal Probability
Let $X$ and $Y$ be two random variables with domains $\{v_1, \dots, v_m\}$ and $\{w_1, \dots, w_n\}$. For each pair of values we have a joint probability $p_{ij} = \Pr[X = v_i, Y = w_j]$ (the joint mass function). The marginal distributions for $X$ and $Y$ are defined as $\Pr[X = v_i] = \sum_{j} p_{ij}$ and $\Pr[Y = w_j] = \sum_{i} p_{ij}$. For c.r.v. the marginal densities can be calculated as $p(x) = \int p(x, y)\,dy$ and $p(y) = \int p(x, y)\,dx$.
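A minimal sketch with a made-up 3x2 joint mass function; the marginals are just row and column sums:

```python
import numpy as np

# Hypothetical joint mass function p_ij = Pr[X = v_i, Y = w_j]
P = np.array([[0.10, 0.20],
              [0.15, 0.25],
              [0.05, 0.25]])
assert np.isclose(P.sum(), 1.0)

p_x = P.sum(axis=1)  # marginal of X: sum over j
p_y = P.sum(axis=0)  # marginal of Y: sum over i
print(p_x, p_y)
```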

Slide 9: Means and variances
The variables $x$ and $y$ are said to be statistically independent if and only if $p(x, y) = p(x)\,p(y)$. The expected value of a function $f(x, y)$ of two random variables $x$ and $y$ is defined as $E[f(x, y)] = \iint f(x, y)\,p(x, y)\,dx\,dy$. The means and variances are $\mu_x = E[x]$, $\mu_y = E[y]$, $\sigma_x^2 = E[(x - \mu_x)^2]$, and $\sigma_y^2 = E[(y - \mu_y)^2]$.
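A sketch of independence and of $E[f(X, Y)]$, using a joint pmf built as an outer product (so independence holds by construction) and $f(x, y) = xy$ as an arbitrary example:

```python
import numpy as np

# Independent joint pmf: p_ij = p_i * q_j
p_x = np.array([0.3, 0.7])
p_y = np.array([0.4, 0.6])
P = np.outer(p_x, p_y)
# The joint equals the product of its own marginals -> independent
print(np.allclose(P, np.outer(P.sum(axis=1), P.sum(axis=0))))  # True

# E[f(X, Y)] = sum_ij f(v_i, w_j) * p_ij, with f(x, y) = x * y
v = np.array([1.0, 2.0])
w = np.array([10.0, 20.0])
print((np.outer(v, w) * P).sum(), (v * p_x).sum() * (w * p_y).sum())  # equal
```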

Slide 10: Covariance matrices
The covariance matrix $\Sigma$ is defined as the square matrix whose $ij$-th element $\sigma_{ij}$ is the covariance of $x_i$ and $x_j$: $\sigma_{ij} = E[(x_i - \mu_i)(x_j - \mu_j)]$.
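A sketch computing $\Sigma$ from data, first by the definition and then with NumPy; the correlated construction of the two variables is an arbitrary assumption:

```python
import numpy as np

rng = np.random.default_rng(3)
x1 = rng.normal(0.0, 1.0, 50_000)
x2 = 0.8 * x1 + rng.normal(0.0, 0.5, 50_000)  # x2 correlated with x1
X = np.stack([x1, x2])                        # rows are variables

# sigma_ij = E[(x_i - mu_i)(x_j - mu_j)]
mu = X.mean(axis=1, keepdims=True)
Sigma = (X - mu) @ (X - mu).T / X.shape[1]
print(Sigma)
print(np.cov(X, bias=True))                   # same result via NumPy
```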

Slide 11: Cauchy-Schwarz inequality
From this we have the Cauchy-Schwarz inequality $\sigma_{xy}^2 \le \sigma_x^2 \sigma_y^2$. The correlation coefficient is the normalized covariance $\rho = \sigma_{xy} / (\sigma_x \sigma_y)$. It always satisfies $-1 \le \rho \le 1$. If $\rho = 0$, the variables $x$ and $y$ are uncorrelated. If $y = ax + b$ and $a > 0$, then $\rho = 1$. If $a < 0$, then $\rho = -1$. Question: prove that if $X$ and $Y$ are independent r.v. then $\rho = 0$.
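A numerical sketch of these facts (the slopes and distributions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(0.0, 1.0, 10_000)

for a in (2.0, -2.0):
    y = a * x + 1.0                    # y = ax + b
    print(a, np.corrcoef(x, y)[0, 1])  # rho = +1 for a > 0, -1 for a < 0

# Cauchy-Schwarz: sigma_xy^2 <= sigma_x^2 * sigma_y^2
y = rng.normal(0.0, 3.0, 10_000)
cov = np.cov(x, y, bias=True)
print(cov[0, 1] ** 2 <= cov[0, 0] * cov[1, 1])  # True
```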

Slide 12: Covariance matrices
If the variables are statistically independent, the covariances are zero, and the covariance matrix is diagonal. The covariance matrix is positive semi-definite: if $w$ is any $d$-dimensional vector, then $w^T \Sigma w \ge 0$. This is equivalent to the requirement that none of the eigenvalues of $\Sigma$ can ever be negative.
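A sketch checking positive semi-definiteness on an arbitrary sample covariance:

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(3, 10_000))  # three arbitrary variables
Sigma = np.cov(X)

w = rng.normal(size=3)            # any d-dimensional vector
print(w @ Sigma @ w >= 0)         # True: w^T Sigma w >= 0
print(np.linalg.eigvalsh(Sigma))  # all eigenvalues non-negative
```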

Slide 13: Univariate normal density
The normal or Gaussian probability function is very important. In the 1-dimensional case, it is defined by the probability density function $p(x) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\frac{(x - \mu)^2}{2\sigma^2}\right)$. The normal density is described as a "bell-shaped curve", and it is completely determined by $\mu$ and $\sigma$. The probabilities obey $\Pr[|X - \mu| \le \sigma] \approx 0.68$, $\Pr[|X - \mu| \le 2\sigma] \approx 0.95$, and $\Pr[|X - \mu| \le 3\sigma] \approx 0.997$.
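A sketch checking the quoted probabilities with SciPy's normal CDF ($\mu$ and $\sigma$ chosen arbitrarily):

```python
from scipy import stats

mu, sigma = 0.0, 1.0
norm = stats.norm(mu, sigma)

# Pr[|X - mu| <= k*sigma] for k = 1, 2, 3
for k in (1, 2, 3):
    print(k, norm.cdf(mu + k * sigma) - norm.cdf(mu - k * sigma))
# ~0.683, ~0.954, ~0.997
```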

Slide 14: Multivariate Normal densities
Suppose that each of the $d$ random variables $x_i$ is normally distributed, each with its own mean and variance: $p(x_i) \sim N(\mu_i, \sigma_i^2)$. If these variables are independent, their joint density has the form $p(x) = \prod_{i=1}^{d} p(x_i) = \prod_{i=1}^{d} \frac{1}{\sqrt{2\pi}\,\sigma_i} \exp\left(-\frac{(x_i - \mu_i)^2}{2\sigma_i^2}\right)$. This can be written in a compact matrix form if we observe that for this case the covariance matrix is diagonal, i.e., $\Sigma = \operatorname{diag}(\sigma_1^2, \dots, \sigma_d^2)$,

Slide 15: Covariance matrices
and hence the inverse of the covariance matrix is easily written as $\Sigma^{-1} = \operatorname{diag}(1/\sigma_1^2, \dots, 1/\sigma_d^2)$,

Slide 16: Covariance matrices
and $(x - \mu)^T \Sigma^{-1} (x - \mu) = \sum_{i=1}^{d} \frac{(x_i - \mu_i)^2}{\sigma_i^2}$. Finally, by noting that the determinant of $\Sigma$ is just the product of the variances, $|\Sigma| = \prod_{i=1}^{d} \sigma_i^2$, we can write the joint density in the form $p(x) = \frac{1}{(2\pi)^{d/2} |\Sigma|^{1/2}} \exp\left(-\frac{1}{2}(x - \mu)^T \Sigma^{-1} (x - \mu)\right)$. This is the general form of a multivariate normal density function, where the covariance matrix is no longer required to be diagonal.
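A sketch evaluating this general density directly from the formula and checking it against SciPy; the mean, covariance, and query point are arbitrary:

```python
import numpy as np
from scipy import stats

mu = np.array([1.0, 2.0])
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])  # not diagonal
x = np.array([0.0, 0.0])

d = len(mu)
diff = x - mu
quad = diff @ np.linalg.inv(Sigma) @ diff  # (x - mu)^T Sigma^-1 (x - mu)
density = np.exp(-0.5 * quad) / ((2 * np.pi) ** (d / 2) * np.sqrt(np.linalg.det(Sigma)))
print(density)
print(stats.multivariate_normal(mu, Sigma).pdf(x))  # same value
```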

Slide 17: Covariance matrices
The natural measure of the distance from $x$ to the mean $\mu$ is provided by the quantity $r^2 = (x - \mu)^T \Sigma^{-1} (x - \mu)$, which is the square of the Mahalanobis distance from $x$ to $\mu$.
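A sketch of the squared Mahalanobis distance, cross-checked with SciPy (whose helper takes the inverse covariance); the numbers are arbitrary:

```python
import numpy as np
from scipy.spatial.distance import mahalanobis

mu = np.array([1.0, 2.0])
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])
x = np.array([3.0, 1.0])

Sigma_inv = np.linalg.inv(Sigma)
r2 = (x - mu) @ Sigma_inv @ (x - mu)  # squared Mahalanobis distance
print(np.sqrt(r2))
print(mahalanobis(x, mu, Sigma_inv))  # same value via SciPy
```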

Slide 18: Example: Bivariate Normal Density
For $d = 2$ the covariance matrix is $\Sigma = \begin{pmatrix} \sigma_x^2 & \rho\sigma_x\sigma_y \\ \rho\sigma_x\sigma_y & \sigma_y^2 \end{pmatrix}$, where $\rho$ is the correlation coefficient; thus $|\Sigma| = \sigma_x^2 \sigma_y^2 (1 - \rho^2)$, and after doing the dot products in $(x - \mu)^T \Sigma^{-1} (x - \mu)$ we get the expression for the bivariate normal density: $p(x, y) = \frac{1}{2\pi \sigma_x \sigma_y \sqrt{1 - \rho^2}} \exp\left(-\frac{1}{2(1 - \rho^2)}\left[\frac{(x - \mu_x)^2}{\sigma_x^2} - \frac{2\rho (x - \mu_x)(y - \mu_y)}{\sigma_x \sigma_y} + \frac{(y - \mu_y)^2}{\sigma_y^2}\right]\right)$.
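A sketch confirming that this explicit expression agrees with the matrix form from the previous slides (parameters chosen arbitrarily):

```python
import numpy as np
from scipy import stats

mx, my, sx, sy, rho = 0.0, 0.0, 1.0, 2.0, 0.6
Sigma = np.array([[sx**2,      rho*sx*sy],
                  [rho*sx*sy,  sy**2]])
x, y = 1.0, -0.5

# Explicit bivariate formula
q = ((x-mx)**2/sx**2 - 2*rho*(x-mx)*(y-my)/(sx*sy) + (y-my)**2/sy**2) / (1 - rho**2)
p_explicit = np.exp(-0.5 * q) / (2*np.pi*sx*sy*np.sqrt(1 - rho**2))

# Matrix form via SciPy
p_matrix = stats.multivariate_normal([mx, my], Sigma).pdf([x, y])
print(p_explicit, p_matrix)  # equal
```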

Slide 19: Some Geometric Features
The level curves of the 2D Gaussian are ellipses; the principal axes are in the directions of the eigenvectors of $\Sigma$, and the different widths correspond to the corresponding eigenvalues. For uncorrelated r.v. ($\rho = 0$) the axes are parallel to the coordinate axes. In the extreme case $\rho = \pm 1$ the ellipses collapse into straight lines (in fact there is only one independent r.v.). Marginal and conditional densities are one-dimensional normal.
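A sketch extracting the ellipse axes from an arbitrary 2-D covariance via eigendecomposition:

```python
import numpy as np

Sigma = np.array([[3.0, 1.0],
                  [1.0, 2.0]])  # an arbitrary 2-D covariance

eigvals, eigvecs = np.linalg.eigh(Sigma)
# Columns of eigvecs are the principal axes of the level-curve ellipses;
# the semi-axis lengths scale with the square roots of the eigenvalues.
print(eigvecs)
print(np.sqrt(eigvals))  # relative widths of the ellipse
```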

Slide 20: Some Geometric Features (figure-only slide)

Slide 21: Law of Large Numbers and Central Limit Theorem
Law of large numbers: let $X_1, X_2, \dots$ be a series of i.i.d. (independent and identically distributed) random variables with $E[X_i] = \mu$. Then for $S_n = X_1 + \dots + X_n$, $S_n / n \to \mu$ as $n \to \infty$.
Central Limit Theorem: let $X_1, X_2, \dots$ be a series of i.i.d. r.v. with $E[X_i] = \mu$ and variance $\operatorname{var}(X_i) = \sigma^2$. Then for $S_n = X_1 + \dots + X_n$, $\frac{S_n - n\mu}{\sigma\sqrt{n}}$ converges in distribution to $N(0, 1)$.
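A simulation sketch of both statements, using Uniform(0, 1) variables (so $\mu = 1/2$ and $\sigma^2 = 1/12$); the sample sizes are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(6)
mu, sigma = 0.5, np.sqrt(1 / 12)  # mean and std of Uniform(0, 1)
n, trials = 1_000, 2_000

# Law of large numbers: S_n / n -> mu
print(rng.uniform(0, 1, n).mean())  # close to 0.5

# CLT: (S_n - n*mu) / (sigma * sqrt(n)) is approximately N(0, 1)
S = rng.uniform(0, 1, (trials, n)).sum(axis=1)
z = (S - n * mu) / (sigma * np.sqrt(n))
print(z.mean(), z.std())  # ~0 and ~1
```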

