Laws of distribution of random variables. The binomial law of distribution.


1 Laws of distribution of random variables. The binomial law of distribution.

2 Sensor Model The prior p(θ|I) The sensor reliability P(λ|I)
Likelihood p(y|θ,λ,I)

3 Outline
Random experiments
Samples and Sample Space
Events
Probability Space
Axioms of Probability
Conditional Probability
Bayes’ Rule
Independence of Events

4 Random Experiments A random experiment is an experiment whose outcome varies in an unpredictable fashion when the experiment is repeated under the same conditions. A random experiment is specified by stating an experimental procedure and a set of one or more measurements or observations. Examples: E1: Toss a coin three times and note the sides facing up (heads or tails). E2: Pick a number at random between zero and one. E3: Poses done by a rookie dancer.

5 Samples and Sample Space
A sample point ω, or an outcome of a random experiment, is defined as a result that cannot be decomposed into other results. Sample space (S): the set of all possible outcomes of a random experiment. A countable or discrete sample space has a one-to-one correspondence between outcomes and integers; otherwise the sample space is uncountable or continuous.

6 Events An event is a subset of the sample space S, i.e. a set of samples.
Two special events: the certain event S, and the impossible or null event ∅.

7 Probability Space {S, E, P}
S: sample space, the set of outcomes {ω} of a random experiment. E: event space, a collection of subsets {A} of the sample space. P: probability measure of an event, P(A), ranging over [0,1] and encoding how likely the event is to happen.

8 Axioms of Probability 0 ≤ P(A). P(S) = 1.
If A ∩ B = ∅, then P(A ∪ B) = P(A) + P(B). Given a sequence of events Ai, if Ai ∩ Aj = ∅ for all i ≠ j, then P(∪_{i=1}^{∞} Ai) = Σ_{i=1}^{∞} P(Ai), referred to as countable additivity.

9 Some Corollaries P(Aᶜ) = 1 − P(A). P(∅) = 0. P(A) ≤ 1.
Given a finite sequence of events A1, …, An, if Ai ∩ Aj = ∅ for all i ≠ j, then P(∪_{i=1}^{n} Ai) = Σ_{i=1}^{n} P(Ai). For arbitrary events, P(A ∪ B) = P(A) + P(B) − P(A ∩ B). (The slide's Venn diagram shows A, B, and their overlap A ∩ B.)

10 Conditional Probability
P(A|B) = P(A ∩ B) / P(B), for P(B) > 0. Imagine that probability is proportional to area: conditioning on B rescales the sample space S to the region B, so P(A|B) is the fraction of B's area covered by A ∩ B.

11 Theorem of Total Probability
Let {Bi} be a partition of the sample space S (the slide's figure shows seven blocks B1, …, B7 covering S). Then P(A) = Σ_{i=1}^{7} P(A ∩ Bi) = Σ_{i=1}^{7} P(A | Bi) P(Bi).

12 Bayes’ Rule Let {Bi} be a partition of the sample space S.
Suppose that event A occurs; what is the probability of the event Bj? By the definition of conditional probability, P(Bj | A) = P(A ∩ Bj) / P(A) = P(A | Bj) P(Bj) / Σ_i P(A | Bi) P(Bi). Here P(Bj) is the a priori probability and P(Bj | A) the a posteriori probability.
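
As an illustration (not part of the original deck), the sketch below computes P(A) by total probability and the posteriors P(Bj | A) by Bayes' rule for an assumed three-block partition with made-up priors and likelihoods:

```python
# Partition {B1, B2, B3} of S with assumed priors, and assumed P(A | Bi).
prior = {"B1": 0.5, "B2": 0.3, "B3": 0.2}          # a priori P(Bi)
likelihood = {"B1": 0.10, "B2": 0.40, "B3": 0.80}  # P(A | Bi)

# Theorem of total probability: P(A) = sum_i P(A | Bi) P(Bi)
P_A = sum(likelihood[b] * prior[b] for b in prior)

# Bayes' rule: the a posteriori probability P(Bj | A)
posterior = {b: likelihood[b] * prior[b] / P_A for b in prior}

print(P_A)        # 0.33
print(posterior)  # posteriors are renormalized, so they sum to 1
assert abs(sum(posterior.values()) - 1) < 1e-12
```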

13 Independence of Events
If knowledge of the occurrence of an event B does not alter the probability of some other event A, it is natural to say that event A is independent of B: P(A | B) = P(A), or equivalently P(A ∩ B) = P(A) P(B). The most common application of the independence concept is in assuming that the events of separate experiments are independent; such experiments are referred to as independent experiments.

14 An Example of Independent Events
Experiment: toss a coin twice. E1: the first toss is a head. E2: the second toss is a tail. Consider the experiment constructed by concatenating two separate random experiments, each a single coin toss; its sample space is {(H,H), (H,T), (T,H), (T,T)}.
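
A small sketch of this example (the probabilities follow from assuming a fair coin) verifies P(E1 ∩ E2) = P(E1) P(E2) on the product sample space:

```python
from fractions import Fraction
from itertools import product

# Concatenate two one-toss experiments: the product sample space.
S = list(product("HT", repeat=2))        # [(H,H), (H,T), (T,H), (T,T)]
p = {o: Fraction(1, 4) for o in S}       # fair coin: equally likely outcomes

E1 = {o for o in S if o[0] == "H"}       # first toss is a head
E2 = {o for o in S if o[1] == "T"}       # second toss is a tail

P = lambda A: sum(p[o] for o in A)
assert P(E1 & E2) == P(E1) * P(E2)       # 1/4 == 1/2 * 1/2: E1, E2 independent
```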

15 Last time …
Random experiments
Samples and Sample Space
Events
Probability Space
Axioms of Probability
Conditional Probability
Bayes’ Rule
Independence of Events

16 Random Variables A random variable X is a function that assigns a real number, X(ω), to each outcome ω in the sample space of a random experiment. (The slide's figure maps S through X(ω) to points x on the real line, whose image is SX.)

17 Random Variables Let SX be the set of values taken by X.
X(ω) can be considered a new random experiment whose outcome X(ω) is a function of ω, the outcome of the original experiment.

18 Examples E1: Toss a coin three times. S = {HHH, HHT, HTH, HTT, THH, THT, TTH, TTT}. X(ω) = number of heads in three coin tosses. Note that several ω may share the same value of X(ω). SX = {0, 1, 2, 3}; X is then a random variable taking on the values in the set SX. If the outcome ω of some experiment is already a numerical value, we can immediately reinterpret the outcome as a random variable defined by X(ω) = ω.

19 Probabilities and Random Variables
Let A be an event of the original experiment. There is a corresponding equivalent event B in the new experiment, with X(ω) as outcome, such that A = {ω ∈ S : X(ω) ∈ B} or B = {X(ω) ∈ SX : ω ∈ A}.

20 Probabilities and Random Variables
P(B) = P(A) = P({ω : X(ω) ∈ B}). Two typical events: B = {X = x} and B = {X ∈ I} for an interval I.

21 The Cumulative Distribution Function
The cumulative distribution function (cdf) of a random variable X is defined as the probability of the event {X ≤ x}: FX(x) = P(X ≤ x) for −∞ < x < +∞. In terms of the underlying random experiment, FX(x) = P(X ≤ x) = P({ω : X(ω) ≤ x}). The cdf is simply a convenient way of specifying the probability of all semi-infinite intervals (−∞, x].

22 Major Properties of cdf
0 ≤ FX(x) ≤ 1. lim_{x→+∞} FX(x) = 1. lim_{x→−∞} FX(x) = 0. FX(x) is a non-decreasing function of x, that is, if a < b, then FX(a) ≤ FX(b).

23 Probability of an event
Let A = {a < X ≤ b} with b > a. Then P(A) = P(a < X ≤ b) = FX(b) − FX(a).

24 The Probability Mass Function
Discrete random variables are random variables taking values in a countable set of points. The probability mass function (pmf) is the set of probabilities pX(xk) = P(X = xk) of the elements in SX. For S = {HHH, HHT, HTH, HTT, THH, THT, TTH, TTT} and SX = {0, 1, 2, 3}: pX(0) = 1/8, pX(1) = 3/8, pX(2) = 3/8, pX(3) = 1/8. (The slide plots the pmf as bars at x = 0, 1, 2, 3 and the cdf FX(x) as a staircase through 1/8, 1/2, 7/8, 1.)
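
The pmf and cdf on this slide can be reproduced by direct enumeration; the following sketch (added for illustration) builds pX and FX for X = number of heads in three fair tosses:

```python
from fractions import Fraction
from itertools import product

# Eight equally likely outcomes of three coin tosses; X = number of heads.
S = list(product("HT", repeat=3))
pmf = {}
for omega in S:
    x = omega.count("H")                 # X(omega)
    pmf[x] = pmf.get(x, Fraction(0)) + Fraction(1, 8)

print(pmf)   # pX: 0 -> 1/8, 1 -> 3/8, 2 -> 3/8, 3 -> 1/8

def cdf(x):
    """F_X(x) = P(X <= x): cumulative sum of the pmf."""
    return sum(p for k, p in pmf.items() if k <= x)

assert cdf(0) == Fraction(1, 8) and cdf(1) == Fraction(1, 2)
assert cdf(2) == Fraction(7, 8) and cdf(3) == 1
```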

25 The Probability Density Function
A continuous random variable is defined as a random variable whose cdf is continuous everywhere and, in addition, is sufficiently smooth that it can be written as an integral of some nonnegative function f(x): FX(x) = ∫_{−∞}^{x} f(t) dt. The probability density function (pdf) of X is defined as the derivative of FX(x): pX(x) = dFX(x)/dx.

26 The Probability Density Function
When b − a is small, P(a ≤ X ≤ b) ≈ pX((a+b)/2) · |b − a|. (The slide's figure shades the area under the pdf between a and b, within the support of X.)

27 An Example pX(x) = 1 when x ∈ [0,1], otherwise 0. (The slide plots this flat pdf and the corresponding cdf FX(x), which rises linearly from 0 to 1 on [0,1].)
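
A short sketch of this example (added here, not in the deck): for the uniform pdf, the interval approximation from slide 26 is in fact exact, since the density is constant:

```python
# Uniform density on [0, 1]: p_X(x) = 1 on [0, 1], 0 elsewhere,
# so F_X(x) = x on [0, 1] and P(a < X <= b) = b - a for 0 <= a <= b <= 1.
def pdf(x):
    return 1.0 if 0.0 <= x <= 1.0 else 0.0

def cdf(x):
    return min(max(x, 0.0), 1.0)

a, b = 0.30, 0.32
exact = cdf(b) - cdf(a)               # P(a < X <= b) = F(b) - F(a)
approx = pdf((a + b) / 2) * (b - a)   # pdf(midpoint) * interval length
assert abs(exact - approx) < 1e-12    # exact here, since the pdf is flat
```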

28 FX|Yy(x)=P(Xx|Yy)=P(Xx,Y y)/P(Yy)
Multiple R.V. Joint cdf : FX,Y(x,y)=P(Xx,Y y) Conditional cdf: if P(Yy) > 0, FX|Yy(x)=P(Xx|Yy)=P(Xx,Y y)/P(Yy) Joint pdf: pX,Y(x,y) the 2nd order derivatives of the joint cdf, usually independent of the order when pdfs are continuous. Marginal pdf: pY (y) = X pX,Y(x,y)dx Conditional pdf: pX|Y=y(x)=pX,Y(x,y)/ pY (y)
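
The slide states these relations for densities; as an illustration, the sketch below applies the same marginalization and conditioning formulas to an assumed discrete joint pmf, where the integrals become sums:

```python
from fractions import Fraction as F

# A hypothetical joint pmf p_{X,Y}(x, y) on {0,1} x {0,1}; entries sum to 1.
joint = {(0, 0): F(1, 8), (0, 1): F(3, 8),
         (1, 0): F(2, 8), (1, 1): F(2, 8)}

# Marginal: p_Y(y) = sum_x p_{X,Y}(x, y)   (the integral becomes a sum)
pY = {y: sum(p for (x, yy), p in joint.items() if yy == y) for y in (0, 1)}

# Conditional: p_{X|Y=y}(x) = p_{X,Y}(x, y) / p_Y(y)
pX_given_Y1 = {x: joint[(x, 1)] / pY[1] for x in (0, 1)}

print(pY)           # {0: 3/8, 1: 5/8}
print(pX_given_Y1)  # {0: 3/5, 1: 2/5}  -- sums to 1, as a conditional pmf must
```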

29 Expectation The expectation of a random variable is the weighted average of the values in the support of the random variable: E{X} = Σ_k xk pX(xk) in the discrete case, or E{X} = ∫ x pX(x) dx in the continuous case.

30 Smoothing Property of Conditional Expectation
E_{Y|X}{Y | X = x} = g(x), a function of x, and E{Y} = E_X{E_{Y|X}{Y | X}}.
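
A numeric check of the smoothing property (added for illustration, with an assumed joint pmf):

```python
from fractions import Fraction as F

# A hypothetical joint pmf for discrete X, Y on {0,1} x {0,1}.
joint = {(0, 0): F(1, 4), (0, 1): F(1, 4),
         (1, 0): F(1, 8), (1, 1): F(3, 8)}

pX = {x: sum(p for (xx, y), p in joint.items() if xx == x) for x in (0, 1)}

def E_Y_given_X(x):
    """g(x) = E{Y | X = x} = sum_y y * p_{Y|X=x}(y)."""
    return sum(y * p for (xx, y), p in joint.items() if xx == x) / pX[x]

# Outer expectation over X of the inner conditional expectation ...
lhs = sum(E_Y_given_X(x) * pX[x] for x in (0, 1))
# ... equals the plain expectation E{Y}.
rhs = sum(y * p for (x, y), p in joint.items())
assert lhs == rhs   # E{Y} = E_X{ E_{Y|X}{Y|X} } = 5/8 here
```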

31 Fundamental Theorem of Expectations
Let Y = g(X). Then E{Y} = E{g(X)} = Σ_k g(xk) pX(xk) (or ∫ g(x) pX(x) dx), so the expectation of Y can be computed from the pmf/pdf of X alone. Recall that E{Y} = E_X{E_{Y|X}{Y | X}}.

32 Variance of a R.V. The weighted squared difference from the expectation: Var(X) = ∫ (x − E{X})² pX(x) dx.
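
Using the pmf from slide 24, this added sketch computes the variance both as the weighted squared deviation and via the shortcut Var(X) = E{X²} − (E{X})²:

```python
from fractions import Fraction as F

# pmf of X = number of heads in three fair coin tosses (slide 24).
pmf = {0: F(1, 8), 1: F(3, 8), 2: F(3, 8), 3: F(1, 8)}

EX = sum(x * p for x, p in pmf.items())                # E{X} = 3/2
var = sum((x - EX) ** 2 * p for x, p in pmf.items())   # weighted squared deviation
assert EX == F(3, 2) and var == F(3, 4)

# Equivalent shortcut: Var(X) = E{X^2} - (E{X})^2
EX2 = sum(x ** 2 * p for x, p in pmf.items())
assert var == EX2 - EX ** 2
```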

33 Last Time … Random Variables: cdf, pmf, pdf, expectation, variance.

34 Correlation and Covariance of Two R.V.
Let X and Y be two random variables. The correlation between X and Y is given by E{XY} = ∫∫ xy pX,Y(x,y) dx dy. The covariance of X and Y is given by COV(X,Y) = E{(X − E{X})(Y − E{Y})} = ∫∫ (x − E{X})(y − E{Y}) pX,Y(x,y) dx dy = E{XY} − E{X}E{Y}. When COV(X,Y) = 0, i.e. E{XY} = E{X}E{Y}, X and Y are (linearly) uncorrelated. When E{XY} = 0, X and Y are orthogonal.
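
As an illustration (with an assumed discrete joint pmf), the sketch below computes the correlation E{XY} and checks that the two covariance formulas agree:

```python
from fractions import Fraction as F

# A hypothetical joint pmf for discrete X, Y; sums replace the integrals.
joint = {(0, 0): F(3, 8), (0, 1): F(1, 8),
         (1, 0): F(1, 8), (1, 1): F(3, 8)}

EX  = sum(x * p for (x, y), p in joint.items())
EY  = sum(y * p for (x, y), p in joint.items())
EXY = sum(x * y * p for (x, y), p in joint.items())   # correlation E{XY}

cov_direct   = sum((x - EX) * (y - EY) * p for (x, y), p in joint.items())
cov_shortcut = EXY - EX * EY                          # E{XY} - E{X}E{Y}
assert cov_direct == cov_shortcut                     # 3/8 - 1/4 = 1/8 here
```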

35 Correlation coefficient ρX,Y = COV(X,Y) / (σX σY), where σX and σY are the standard deviations of X and Y; ρX,Y always lies in [−1, 1].

36 Independent R.V.s If p_{X|Y=y}(x) = pX,Y(x,y) / pY(y) = pX(x) for all x and y, then X and Y are independent random variables, i.e. independence ⇔ pX,Y(x,y) = pX(x) pY(y) for all x, y.

37 Independent vs. Uncorrelated
Independent ⇒ uncorrelated, but uncorrelated does not imply independent. Example of uncorrelated but dependent random variables: let Θ ~ U[0, 2π] and X = cos Θ, Y = sin Θ. Then E{X} = (1/2π) ∫₀^{2π} cos θ dθ = 0 and E{Y} = 0; COV(X,Y) = (1/2π) ∫₀^{2π} cos θ sin θ dθ = (1/4π) ∫₀^{2π} sin 2θ dθ = 0. Yet X and Y are dependent, since X² + Y² = 1. If X and Y are jointly Gaussian, independent ⇔ uncorrelated.
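
The integrals above can be checked numerically; this added sketch approximates E{X}, E{Y}, and E{XY} for Θ ~ U[0, 2π] by a Riemann sum and confirms that the covariance vanishes:

```python
import math

# Theta ~ U[0, 2*pi]; X = cos(Theta), Y = sin(Theta).
# Approximate the expectations by a midpoint Riemann sum over [0, 2*pi].
N = 100_000
thetas = [2 * math.pi * (k + 0.5) / N for k in range(N)]

EX  = sum(math.cos(t) for t in thetas) / N   # (1/2pi) * integral of cos = 0
EY  = sum(math.sin(t) for t in thetas) / N   # likewise 0
EXY = sum(math.cos(t) * math.sin(t) for t in thetas) / N

# COV(X, Y) = E{XY} - E{X}E{Y} is (numerically) zero: X, Y uncorrelated ...
assert abs(EXY - EX * EY) < 1e-9
# ... yet fully dependent, since X^2 + Y^2 = 1 ties Y to X (up to sign).
```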

38 Covariance Matrix of Random Vector
X = (X1, X2, …, Xn)ᵀ. The covariance matrix of random vector X is given by CX = E{(X − E{X})(X − E{X})ᵀ}, with entries CX(i,j) = COV(Xi, Xj). Properties of CX: symmetric, CX = CXᵀ, i.e. CX(i,j) = CX(j,i); positive semi-definite, ∀ α ∈ Rⁿ, αᵀ CX α ≥ 0, since VAR(αᵀX) = αᵀ CX α ≥ 0.
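
A sketch of these properties (added here; the 2-D vector and its sampling scheme are assumptions): estimate a sample covariance matrix and check that the quadratic form αᵀ CX α is non-negative:

```python
import random

# Draw samples of a hypothetical 2-D random vector X = (X1, X2).
random.seed(0)
n = 10_000
xs = []
for _ in range(n):
    x1 = random.gauss(0, 1)
    x2 = 0.5 * x1 + random.gauss(0, 1)   # correlated with x1 by construction
    xs.append((x1, x2))

m1 = sum(x1 for x1, _ in xs) / n
m2 = sum(x2 for _, x2 in xs) / n
# Sample covariance matrix C_X(i, j) = COV(Xi, Xj); c12 = c21 by symmetry.
c11 = sum((x1 - m1) ** 2 for x1, _ in xs) / n
c22 = sum((x2 - m2) ** 2 for _, x2 in xs) / n
c12 = sum((x1 - m1) * (x2 - m2) for x1, x2 in xs) / n

# Positive semi-definite: a^T C a = VAR(a1*X1 + a2*X2) >= 0 for every a.
a1, a2 = 1.7, -0.3
quad = a1 * a1 * c11 + 2 * a1 * a2 * c12 + a2 * a2 * c22
assert quad >= 0
```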

39 First Order Markov Process
Let {X1, X2, …, Xn} be a sequence of random variables (or vectors), e.g. the joint-angle vectors in a gait cycle over a period of time. We call this process a first-order Markov process if the following Markov property holds: P(Xn = xn | Xn−1 = xn−1, …, X1 = x1) = P(Xn = xn | Xn−1 = xn−1), i.e. P(future | present, past) = P(future | present). The process's memory is limited to one step only.

40 Chain Rule P(Xn = xn, Xn−1 = xn−1, …, X1 = x1)
= P(Xn = xn | Xn−1 = xn−1, …, X1 = x1) P(Xn−1 = xn−1, …, X1 = x1)
= P(Xn = xn | Xn−1 = xn−1) P(Xn−1 = xn−1, …, X1 = x1)
= P(Xn = xn | Xn−1 = xn−1) P(Xn−1 = xn−1 | Xn−2 = xn−2) … P(X1 = x1)
= P(X1 = x1) Π_{k=2}^{n} P(Xk = xk | Xk−1 = xk−1)
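
The factorization can be exercised directly; in this added sketch the initial law and transition probabilities are illustrative assumptions, not values from the slides:

```python
from itertools import product

# First-order Markov chain on states {0, 1}: assumed initial law and transitions.
p0 = {0: 0.6, 1: 0.4}                  # P(X1 = x1)
trans = {(0, 0): 0.9, (0, 1): 0.1,     # P(X_k = j | X_{k-1} = i)
         (1, 0): 0.2, (1, 1): 0.8}

def joint(path):
    """P(X1=x1, ..., Xn=xn) = P(X1=x1) * prod_k P(Xk=xk | X_{k-1}=x_{k-1})."""
    p = p0[path[0]]
    for prev, cur in zip(path, path[1:]):
        p *= trans[(prev, cur)]
    return p

print(joint((0, 0, 1, 1)))   # 0.6 * 0.9 * 0.1 * 0.8 = 0.0432
# Sanity check: joint probabilities over all length-3 paths sum to 1.
assert abs(sum(joint(p) for p in product((0, 1), repeat=3)) - 1) < 1e-12
```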

41 Dynamic Systems System and observation equations; by the chain rule, the joint distribution of the states and observations factorizes as on the previous slide.

42 Discrete State Markov Chains
Given a finite discrete set S of states, a Markov chain process occupies one of these states at each unit of time. The process either stays in the same state or moves to some other state in S. (The slide's diagram shows three states S1, S2, S3 with transition probabilities 1/2, 1/3, 2/3, and 1/6 on its edges.)
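
As a closing illustration (not from the deck), here is a minimal simulation of a three-state chain. The slide's diagram does not fully determine which edge carries which probability, so the transition matrix below is an assumption that merely reuses the labels 1/2, 1/3, 2/3, 1/6:

```python
import random

# Hypothetical transition matrix for states S1, S2, S3; each row sums to 1.
T = {"S1": {"S1": 1/2, "S2": 1/3, "S3": 1/6},
     "S2": {"S1": 1/3, "S2": 2/3, "S3": 0.0},
     "S3": {"S1": 1/6, "S2": 1/3, "S3": 1/2}}

def step(state, rng):
    """Move to the next state with the probabilities in row T[state]."""
    r, acc = rng.random(), 0.0
    for nxt, p in T[state].items():
        acc += p
        if r < acc:
            return nxt
    return state   # guard against floating-point rounding

rng = random.Random(0)
state, visits = "S1", {"S1": 0, "S2": 0, "S3": 0}
for _ in range(100_000):
    state = step(state, rng)
    visits[state] += 1
print(visits)   # long-run visit frequencies approximate the stationary distribution
```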

43 Good luck

