
1 Probability: Many Random Variables (Part 2)
Mike Wasikowski
June 12, 2008

2 Contents
Indicator RVs
Derived RVs
Order RVs
Continuous RV Transformations

3 Indicator RVs
I_A = 1 if event A occurs, 0 if not.
Consider events A_1, A_2, ..., A_n, let I_1, I_2, ..., I_n be their indicator RVs, and let p_1, p_2, ..., p_n be the probabilities that the events A_i occur.
Then Σ_j I_j is the total number of events that occur.
The mean of a sum of RVs equals the sum of their means (regardless of dependence), so E(Σ_j I_j) = Σ_j E(I_j) = Σ_j p_j.
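As a quick check, here is a minimal NumPy sketch (the probabilities p_j are illustrative choices, and the events are simulated as independent, though the mean identity holds regardless of dependence):

```python
import numpy as np

rng = np.random.default_rng(0)
p = np.array([0.1, 0.4, 0.7])        # illustrative probabilities p_1..p_n
trials = 100_000

# Each column j: indicator I_j = 1 if event A_j occurs on that trial
indicators = rng.random((trials, p.size)) < p
counts = indicators.sum(axis=1)      # sum_j I_j per trial

print(counts.mean())  # ~ 1.2 = sum of the p_j
print(p.sum())
```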

4 Indicator RVs
If all the p_i are equal to a common p, then E(Σ_j I_j) = np.
When all events are independent, the variance of the number of events that occur is p_1(1-p_1) + ... + p_n(1-p_n).
If all the p_i equal p and the events are independent, the variance is np(1-p).
In that case Σ_j I_j has a binomial distribution.
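A short sketch of the equal, independent case, checking both np and np(1-p) (n and p are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, trials = 20, 0.3, 200_000
counts = (rng.random((trials, n)) < p).sum(axis=1)  # Binomial(n, p) draws

print(counts.mean(), n * p)            # ~ 6.0
print(counts.var(), n * p * (1 - p))   # ~ 4.2
```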

5 Ex: Sequencing EST Libraries
Transcription: DNA → mRNA → amino acids/proteins.
An EST (expressed sequence tag) is a sequence of 100+ base pairs of mRNA.
Different genes are expressed at different levels inside a cell.
Abundance class L: the mRNA "species" of which a cell contains L copies.
Generate an EST database by sampling with replacement from the mRNA pool; rarer species are seen less often.
How does the number of samples affect the proportion of rare species we will see?

6 Ex: Sequencing EST Libraries
Using indicator RVs makes this problem easy to solve.
Let I_a = 1 if species a appears among the S samples, 0 if not.
The number of class-L species observed is Σ_a I_a, summed over the n_L species in abundance class L.
Each I_a has the same mean p_L, so E(Σ_a I_a) = n_L p_L.

7 Ex: Sequencing EST Libraries
Let p_L = 1 - r_L, where r_L is the probability that a class-L species does not appear in the database.
With N mRNAs in the pool, r_L = (1 - L/N)^S.
Thus we get E(Σ_a I_a) = n_L (1 - (1 - L/N)^S).
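Putting the pieces together, a small sketch of how the expected count grows with S (the class sizes n_L, copy numbers L, pool size N, and sample counts S below are made-up illustrative values):

```python
def expected_species(n_L, L, N, S):
    """E[number of class-L species observed] = n_L * (1 - (1 - L/N)**S)."""
    return n_L * (1.0 - (1.0 - L / N) ** S)

N = 1_000_000  # assumed total number of mRNAs in the pool
for S in (10_000, 100_000, 1_000_000):
    rare = expected_species(n_L=500, L=1, N=N, S=S)     # rare class, L = 1
    common = expected_species(n_L=50, L=100, N=N, S=S)  # common class, L = 100
    print(S, round(rare, 1), round(common, 1))
```

The common class saturates quickly, while the rare class requires S on the order of N before most of its species appear.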

8 Derived RVs
We previously saw how to find joint distributions and density functions.
These joint pdfs can be used to define many new RVs:
Sum
Average
Orderings
Because many statistical operations use these RVs, knowing the properties of their distributions is important.

9 Sums and Averages
The two most important derived RVs:
S_n = X_1 + X_2 + ... + X_n
X̄ = S_n / n
For iid X_i with mean μ and variance σ²: the mean of S_n is nμ and its variance is nσ²; the mean of X̄ is μ and its variance is σ²/n.
These properties generalize to well-behaved functions of RVs and vectors of RVs as well.
Many important applications in probability and statistics use sums and averages.
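A minimal simulation of these two facts, using Exp(1) draws (μ = σ² = 1) purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
n, trials = 25, 100_000
x = rng.exponential(scale=1.0, size=(trials, n))  # mu = 1, sigma^2 = 1

s_n = x.sum(axis=1)
xbar = s_n / n
print(s_n.mean(), s_n.var())    # ~ n*mu = 25, ~ n*sigma^2 = 25
print(xbar.mean(), xbar.var())  # ~ mu = 1,  ~ sigma^2/n = 0.04
```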

10 Central Limit Theorem
If X_1, X_2, ..., X_n are iid with finite mean μ and variance σ², then as n → ∞ the standardized RV (X̄ - μ)√n / σ converges in distribution to N(0, 1).
[Image from Wikipedia: Central Limit Theorem]
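A sketch of the convergence: standardize averages of skewed Exp(1) draws and compare a tail probability with the N(0,1) value of ~0.975 (the sample sizes are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(3)
trials = 200_000
for n in (2, 10, 100):
    xbar = rng.exponential(size=(trials, n)).mean(axis=1)
    z = (xbar - 1.0) * np.sqrt(n) / 1.0   # (Xbar - mu) * sqrt(n) / sigma
    # P(Z <= 1.96) approaches 0.975 as the skew washes out with growing n
    print(n, np.mean(z <= 1.96))
```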

11 Order Statistics
Order statistics involve the ordering of n iid RVs.
Call the smallest X_(1), the next smallest X_(2), and so on up to the biggest X_(n).
X_min = X_(1), X_max = X_(n).
These order statistics are distinct (with probability 1) because P(X_(i) = X_(j)) = 0 for independent continuous RVs.

12 Minimum RV (X_min)
Let X_1, X_2, ..., X_n be iid as X.
X_min ≥ x exactly when every X_i ≥ x.
So P(X_min ≥ x) = P(X ≥ x)^n, also written 1 - F_min(x) = (1 - F_X(x))^n.
By differentiating, we get the density function f_min(x) = n f_X(x) (1 - F_X(x))^(n-1).

13 Maximum RV (X_max)
Let X_1, X_2, ..., X_n be iid as X.
X_max ≤ x exactly when every X_i ≤ x.
So P(X_max ≤ x) = P(X ≤ x)^n, also written F_max(x) = (F_X(x))^n.
By differentiating, we get the density function f_max(x) = n f_X(x) (F_X(x))^(n-1).
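A quick check of both formulas against simulation for X ~ Uniform(0,1), where F_X(x) = x, so P(X_min ≥ t) = (1-t)^n and P(X_max ≤ t) = t^n (n and the test point t are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(4)
n, trials = 5, 200_000
x = rng.random((trials, n))   # iid Uniform(0,1) samples, one row per trial

t = 0.3
print(np.mean(x.min(axis=1) >= t), (1 - t) ** n)  # P(Xmin >= t) = (1-F(t))^n
print(np.mean(x.max(axis=1) <= t), t ** n)        # P(Xmax <= t) = F(t)^n
```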

14 Density function of X_(i)
Let h be a small value, and ignore events of probability o(h).
Consider the event u < X_(i) < u + h.
In this event, i-1 of the RVs are less than u, one is between u and u+h, and the remaining n-i exceed u+h.
This is a multinomial event with n trials and 3 outcomes.
We use the approximation P(u < X < u+h) ≈ f_X(u) h.

15 Density function of X_(i)
The multinomial probability of this event, divided by h as h → 0, gives the density
f_(i)(u) = [n! / ((i-1)! (n-i)!)] (F_X(u))^(i-1) f_X(u) (1 - F_X(u))^(n-i)
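For Uniform(0,1) samples this density is Beta(i, n-i+1), whose mean is i/(n+1); a sketch comparing simulated means of X_(i) with that value (n and the chosen ranks i are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
n, trials = 7, 200_000
x = np.sort(rng.random((trials, n)), axis=1)  # each row: X_(1) <= ... <= X_(n)

for i in (1, 4, 7):  # 1-indexed order statistic
    # X_(i) ~ Beta(i, n-i+1) for Uniform(0,1) samples; E[X_(i)] = i/(n+1)
    print(i, x[:, i - 1].mean(), i / (n + 1))
```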

16 Continuous RV Transformations
Consider n continuous RVs X_1, X_2, ..., X_n.
Let V_1 = V_1(X_1, X_2, ..., X_n), with V_2, ..., V_n defined similarly; we then have a mapping from (X_1, X_2, ..., X_n) to (V_1, V_2, ..., V_n).
If the mapping is one-to-one and differentiable with a differentiable inverse, we can define the Jacobian matrix of the inverse, and the joint density transforms by the absolute value of its determinant.
Jacobian transformations are used to find the marginal density of one RV when that would otherwise be difficult.
They are used in ANOVA as well as in BLAST.
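As a worked sketch (the specific mapping is an illustrative choice, not one from the slides): for X_1, X_2 iid Exp(1), let V_1 = X_1 + X_2 and V_2 = X_1/(X_1 + X_2). The inverse is x_1 = v_1 v_2, x_2 = v_1(1 - v_2), and SymPy can compute the Jacobian determinant and the transformed joint density:

```python
import sympy as sp

v1, v2 = sp.symbols('v1 v2', positive=True)

# Inverse mapping: x1 = v1*v2, x2 = v1*(1 - v2)
x1 = v1 * v2
x2 = v1 * (1 - v2)

J = sp.Matrix([x1, x2]).jacobian(sp.Matrix([v1, v2]))
jac_det = sp.simplify(J.det())            # -v1, so |det| = v1

f_xy = sp.exp(-(x1 + x2))                 # joint density of two iid Exp(1) RVs
f_v = sp.simplify(f_xy * sp.Abs(jac_det)) # joint density of (V1, V2)
print(jac_det)  # -v1
print(f_v)      # v1*exp(-v1): V1 ~ Gamma(2,1) and V2 ~ Uniform(0,1), independent
```

The factored result shows the payoff: the joint density splits into a function of v1 times a constant in v2, giving both marginals at once.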

17 Questions?

