236607 Visual Recognition Tutorial: Random variables, distributions, and probability density functions


Contents

- Random variables, distributions, and probability density functions
- Discrete random variables
- Continuous random variables
- Expected values and moments
- Joint and marginal probability
- Means and variances
- Covariance matrices
- Univariate normal density
- Multivariate normal densities

Random Variables, Distributions, and Probability Density Functions

A random variable X is a variable whose value is set as a consequence of random events, that is, events whose outcomes are impossible to know in advance. The set of all possible outcomes is called the sample space and is denoted by Ω. A random variable can thus be treated as a "non-deterministic" function X that assigns a value to every possible random event. We will be dealing with real random variables.

The (cumulative) distribution function is the function F(x) = Pr[X ≤ x], defined for every x.

Discrete Random Variables

Let X be a discrete random variable (d.r.v.) that can assume m different values in the countable set V = {v_1, ..., v_m}. Let p_i be the probability that X assumes the value v_i: p_i = Pr[X = v_i]. The p_i must satisfy p_i ≥ 0 and Σ_{i=1}^{m} p_i = 1. The mass function P satisfies P(v_i) = p_i, and the connection between the distribution function and the mass function is F(x) = Σ_{v_i ≤ x} p_i.
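As a minimal sketch of these definitions, the following code builds a small discrete random variable (the values {1, 2, 3} and probabilities {0.2, 0.5, 0.3} are invented for illustration) and computes F(x) from the mass function:

```python
# A discrete r.v. with invented values and probabilities, for illustration only.
values = [1, 2, 3]
probs = [0.2, 0.5, 0.3]

# The mass function must be non-negative and sum to 1.
assert all(p >= 0 for p in probs)
assert abs(sum(probs) - 1.0) < 1e-12

def F(x):
    """Distribution function F(x) = Pr[X <= x] = sum of p_i over v_i <= x."""
    return sum(p for v, p in zip(values, probs) if v <= x)
```

Note that F is a step function: it jumps by p_i at each value v_i and is constant in between.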

Continuous Random Variables

The domain of a continuous random variable (c.r.v.) is uncountable. The distribution function of a c.r.v. can be defined as F(x) = ∫_{-∞}^{x} p(t) dt, where the function p(x) is called the probability density function. It is important to note that a numerical value of p(x) is not a "probability of x". In the continuous case, p(x)dx is a value that approximately equals the probability Pr[x < X < x + dx].

Continuous Random Variables (cont.)

Important properties of the probability density function: p(x) ≥ 0 for all x, and ∫_{-∞}^{∞} p(x) dx = 1.
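These two properties can be checked numerically; the sketch below does so for the standard normal density (the integration interval and step count are arbitrary choices):

```python
import math

def p(x):
    """Standard normal density, used here as an example of a valid pdf."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

# Trapezoidal integration over [-8, 8]; the tail mass beyond is negligible.
n = 10000
a, b = -8.0, 8.0
h = (b - a) / n
xs = [a + i * h for i in range(n + 1)]
total = h * (sum(p(x) for x in xs) - 0.5 * (p(a) + p(b)))
```

The computed `total` comes out essentially equal to 1, and p(x) is non-negative everywhere on the grid.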

Expected Values and Moments

The mean (also expected value or average) of X is defined by E[X] = ∫ x p(x) dx (for a d.r.v., E[X] = Σ_i v_i p_i). If Y = g(X), we have E[Y] = ∫ g(x) p(x) dx. The variance is defined as var(X) = E[(X − μ)²] = σ², where μ = E[X] and σ is the standard deviation of X.

Expected Values and Moments (cont.)

Intuitively, the variance of X indicates how its samples spread around the expected value (mean). An important property of the mean is its linearity: E[aX + bY] = a E[X] + b E[Y]. At the same time, the variance is not linear: var(aX + b) = a² var(X). The k-th moment of a r.v. X is E[X^k] (the expected value is the first moment). The k-th central moment is E[(X − E[X])^k].
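Both properties can be verified exactly on a small discrete distribution (the values, probabilities, and constants a, b below are invented for the example):

```python
# Invented example distribution: X in {0, 1, 2} with probabilities {0.25, 0.5, 0.25}.
vals = [0, 1, 2]
probs = [0.25, 0.5, 0.25]

def E(g):
    """Expected value of g(X) for the discrete X above."""
    return sum(g(v) * p for v, p in zip(vals, probs))

mean = E(lambda x: x)               # first moment: E[X] = 1.0
var = E(lambda x: (x - mean) ** 2)  # second central moment: var(X) = 0.5

a, b = 3.0, -1.0
mean_lin = E(lambda x: a * x + b)   # linearity: E[aX + b] = a E[X] + b
var_lin = E(lambda x: (a * x + b - mean_lin) ** 2)  # var(aX + b) = a^2 var(X)
```

Here `mean_lin` equals 3·1 − 1 = 2 and `var_lin` equals 9·0.5 = 4.5, matching the formulas.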

Joint and Marginal Probability

Let X and Y be two discrete random variables with domains {v_1, ..., v_m} and {w_1, ..., w_n}. For each pair of values we have a joint probability p_ij = Pr[X = v_i, Y = w_j], which defines the joint mass function. The marginal distributions for X and Y are Pr[X = v_i] = Σ_{j} p_ij and Pr[Y = w_j] = Σ_{i} p_ij. For c.r.v.'s, the marginal densities can be calculated as p(x) = ∫ p(x, y) dy and p(y) = ∫ p(x, y) dx.

Means and Variances

The variables X and Y are said to be statistically independent if and only if p(x, y) = p(x) p(y). The expected value of a function f(x, y) of two random variables X and Y is defined as E[f] = ∫∫ f(x, y) p(x, y) dx dy. The means and variances are μ_x = E[X], μ_y = E[Y], σ_x² = E[(X − μ_x)²], and σ_y² = E[(Y − μ_y)²].
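A small sketch of these ideas for the discrete case (the joint table's numbers are invented): the marginals are row and column sums, and independence means the table equals the outer product of its marginals.

```python
import numpy as np

# Invented joint mass table: P[i, j] = Pr[X = v_i, Y = w_j].
P = np.array([[0.1, 0.2],
              [0.3, 0.4]])

px = P.sum(axis=1)   # marginal of X: sum over the values of Y
py = P.sum(axis=0)   # marginal of Y: sum over the values of X

# X and Y are independent iff P equals the outer product of the marginals.
dependent_table = not np.allclose(P, np.outer(px, py))

# A table built as the outer product of marginals is independent by construction.
P_ind = np.outer(px, py)
```

For this table `dependent_table` is True, while `P_ind` passes the independence check by construction.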

Covariance Matrices

The covariance matrix Σ is defined as the square matrix whose ij-th element σ_ij is the covariance of x_i and x_j: σ_ij = E[(x_i − μ_i)(x_j − μ_j)].
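In practice Σ is usually estimated from samples; a sketch (the true covariance below is invented, and the sample is drawn with a fixed seed):

```python
import numpy as np

# Draw samples from a 2D Gaussian with an invented covariance, then estimate it.
rng = np.random.default_rng(0)
mean = np.array([0.0, 0.0])
cov_true = np.array([[2.0, 0.8],
                     [0.8, 1.0]])
X = rng.multivariate_normal(mean, cov_true, size=20000)

# Sample estimate of sigma_ij = E[(x_i - mu_i)(x_j - mu_j)];
# rowvar=False means each row of X is one observation.
Sigma = np.cov(X, rowvar=False)
```

With 20000 samples the estimate `Sigma` is symmetric and close to `cov_true`.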

Cauchy-Schwarz Inequality

From this we have the Cauchy-Schwarz inequality cov(x, y)² ≤ σ_x² σ_y². The correlation coefficient is the normalized covariance ρ = cov(x, y)/(σ_x σ_y). It always satisfies −1 ≤ ρ ≤ 1. If ρ = 0, the variables x and y are uncorrelated. If y = ax + b and a > 0, then ρ = 1; if a < 0, then ρ = −1. Question: prove that if X and Y are independent r.v.'s, then ρ = 0.
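The ρ = ±1 claim for linear relations is easy to confirm numerically (the data and the constants a = ±3, b = 2 are invented):

```python
import numpy as np

# y = a*x + b with a > 0 gives correlation +1; a < 0 gives -1.
x = np.linspace(0.0, 1.0, 100)
y_pos = 3.0 * x + 2.0    # a > 0
y_neg = -3.0 * x + 2.0   # a < 0

rho_pos = np.corrcoef(x, y_pos)[0, 1]
rho_neg = np.corrcoef(x, y_neg)[0, 1]
```

Up to floating-point rounding, `rho_pos` is exactly 1 and `rho_neg` is exactly −1, independent of the particular a and b chosen.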

Covariance Matrices (cont.)

If the variables are statistically independent, the covariances are zero, and the covariance matrix is diagonal. The covariance matrix is positive semi-definite: if w is any d-dimensional vector, then wᵀΣw ≥ 0. This is equivalent to the requirement that none of the eigenvalues of Σ can be negative.
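A sketch of the equivalence between the two conditions, for an invented covariance matrix: all eigenvalues are non-negative, and the quadratic form wᵀΣw is non-negative for randomly drawn vectors w.

```python
import numpy as np

# Invented covariance matrix (symmetric, positive definite).
Sigma = np.array([[2.0, 0.8],
                  [0.8, 1.0]])

# Eigenvalues of a covariance matrix are never negative.
eigvals = np.linalg.eigvalsh(Sigma)

# Quadratic form w^T Sigma w for many random vectors w.
rng = np.random.default_rng(1)
ws = rng.normal(size=(1000, 2))
quad_forms = np.einsum('ni,ij,nj->n', ws, Sigma, ws)
```

Both checks pass: every eigenvalue and every quadratic form value is non-negative.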

Univariate Normal Density

The normal or Gaussian distribution is very important. In the 1-dimensional case, it is defined by the probability density function p(x) = (1/(√(2π) σ)) exp(−(x − μ)²/(2σ²)). The normal density is described as a "bell-shaped curve", and it is completely determined by μ and σ. The probabilities obey Pr[|X − μ| ≤ σ] ≈ 0.68, Pr[|X − μ| ≤ 2σ] ≈ 0.95, and Pr[|X − μ| ≤ 3σ] ≈ 0.997.
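A sketch of both the density formula and the 68-95-99.7 rule, using the standard-normal CDF expressed through `math.erf`:

```python
import math

def normal_pdf(x, mu, sigma):
    """Density p(x) = exp(-(x - mu)^2 / (2 sigma^2)) / (sqrt(2 pi) sigma)."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (math.sqrt(2 * math.pi) * sigma)

def normal_cdf(x, mu, sigma):
    """CDF via the error function: Phi((x - mu) / sigma)."""
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

# Pr[|X - mu| <= k*sigma] for k = 1, 2, 3 (standard normal).
within = [normal_cdf(k, 0, 1) - normal_cdf(-k, 0, 1) for k in (1, 2, 3)]
```

The resulting probabilities are approximately 0.6827, 0.9545, and 0.9973, matching the rule stated above.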

Multivariate Normal Densities

Suppose that each of the d random variables x_i is normally distributed, each with its own mean and variance: p(x_i) = N(μ_i, σ_i²). If these variables are independent, their joint density has the form p(x) = Π_{i=1}^{d} p(x_i) = Π_{i=1}^{d} (1/(√(2π) σ_i)) exp(−(x_i − μ_i)²/(2σ_i²)). This can be written in a compact matrix form if we observe that for this case the covariance matrix is diagonal, i.e., Σ = diag(σ_1², ..., σ_d²),

Multivariate Normal Densities (cont.)

and hence the inverse of the covariance matrix is easily written as Σ⁻¹ = diag(1/σ_1², ..., 1/σ_d²), so that the quadratic form in the exponent becomes (x − μ)ᵀ Σ⁻¹ (x − μ) = Σ_{i=1}^{d} (x_i − μ_i)²/σ_i². Finally, by noting that the determinant of Σ is just the product of the variances, |Σ| = Π_{i=1}^{d} σ_i², we can write the joint density in the form

p(x) = (1/((2π)^{d/2} |Σ|^{1/2})) exp(−(1/2)(x − μ)ᵀ Σ⁻¹ (x − μ)).

This is the general form of a multivariate normal density function, where the covariance matrix is no longer required to be diagonal.

Multivariate Normal Densities (cont.)

The natural measure of the distance from x to the mean μ is provided by the quantity r² = (x − μ)ᵀ Σ⁻¹ (x − μ), which is the square of the Mahalanobis distance from x to μ.
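A short sketch of the squared Mahalanobis distance (the points and covariance matrices are invented): for Σ = I it reduces to the squared Euclidean distance, while a non-identity Σ rescales each direction by its variance.

```python
import numpy as np

def mahalanobis_sq(x, mu, Sigma):
    """Squared Mahalanobis distance r^2 = (x - mu)^T Sigma^{-1} (x - mu)."""
    diff = x - mu
    return float(diff @ np.linalg.inv(Sigma) @ diff)

mu = np.zeros(2)
x = np.array([3.0, 4.0])

r2_identity = mahalanobis_sq(x, mu, np.eye(2))           # = ||x - mu||^2 = 25
r2_scaled = mahalanobis_sq(x, mu, np.diag([9.0, 16.0]))  # = 9/9 + 16/16 = 2
```

The second result shows the point is only √2 "standard units" from the mean once each coordinate is measured in its own standard deviation.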

Example: Bivariate Normal Density

For d = 2, the covariance matrix is Σ = [σ_1², ρσ_1σ_2; ρσ_1σ_2, σ_2²], where ρ is the correlation coefficient; thus |Σ| = σ_1² σ_2² (1 − ρ²), and after carrying out the dot products in (x − μ)ᵀ Σ⁻¹ (x − μ) we get the expression for the bivariate normal density:

p(x_1, x_2) = (1/(2π σ_1 σ_2 √(1 − ρ²))) exp{ −(1/(2(1 − ρ²))) [ (x_1 − μ_1)²/σ_1² − 2ρ(x_1 − μ_1)(x_2 − μ_2)/(σ_1 σ_2) + (x_2 − μ_2)²/σ_2² ] }.

Some Geometric Features

The level curves of the 2D Gaussian are ellipses; the principal axes are in the directions of the eigenvectors of Σ, and the different widths correspond to the corresponding eigenvalues. For uncorrelated r.v.'s (ρ = 0), the axes are parallel to the coordinate axes. In the extreme case ρ = ±1, the ellipses collapse into straight lines (in fact there is only one independent r.v.). The marginal and conditional densities are one-dimensional normal.

Some Geometric Features (cont.)

[Figure: level curves of the 2D Gaussian.]

Law of Large Numbers and Central Limit Theorem

Law of large numbers: let X_1, X_2, ... be a sequence of i.i.d. (independent and identically distributed) random variables with E[X_i] = μ. Then for S_n = X_1 + ... + X_n, S_n/n → μ as n → ∞.

Central limit theorem: let X_1, X_2, ... be a sequence of i.i.d. r.v.'s with E[X_i] = μ and variance var(X_i) = σ². Then for S_n = X_1 + ... + X_n, the distribution of (S_n − nμ)/(σ√n) converges to the standard normal distribution N(0, 1).
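Both theorems can be illustrated by simulation; the sketch below uses uniform(0, 1) summands (for which μ = 0.5 and σ² = 1/12), with the number of runs, the value of n, and the seed chosen arbitrarily:

```python
import numpy as np

# Uniform(0, 1) summands: mu = 0.5, sigma = sqrt(1/12). Seed fixed for reproducibility.
rng = np.random.default_rng(42)
mu, sigma = 0.5, np.sqrt(1.0 / 12.0)

n = 2000
X = rng.uniform(0.0, 1.0, size=(2000, n))  # 2000 independent runs of length n
S = X.sum(axis=1)

# Law of large numbers: S_n / n concentrates around mu.
sample_means = S / n

# Central limit theorem: (S_n - n*mu) / (sigma * sqrt(n)) is approximately N(0, 1).
Z = (S - n * mu) / (sigma * np.sqrt(n))
```

Every run's sample mean lands close to 0.5, and the standardized sums Z have mean near 0 and standard deviation near 1, as the two theorems predict.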