Review of Probability
Important Topics

1. Random Variables and Probability Distributions
2. Expected Values, Mean, and Variance
3. Two Random Variables
4. The Normal, Chi-Squared, F, and t Distributions
5. Random Sampling and the Sampling Distribution
6. Large-Sample Approximations to Sampling Distributions
Definitions

Outcomes: the mutually exclusive potential results of a random process.
Probability: the proportion of the time that the outcome occurs.
Sample space: the set of all possible outcomes.
Event: a subset of the sample space.
Random variable: a numerical summary of a random outcome.
Probability distribution: discrete variable

A list of all possible [x, p(x)] pairs, where x is a value of the random variable (an outcome) and p(x) is the probability associated with that value. The pairs are mutually exclusive (no overlap) and collectively exhaustive (nothing left out), with

0 ≤ p(x) ≤ 1 for all x
Σ_x p(x) = 1
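These two closing properties are easy to verify mechanically. A minimal Python sketch (the pmf values here are hypothetical, chosen only for illustration):

```python
# Check the two defining properties of a discrete probability distribution
# for a hypothetical pmf given as {x: p(x)} pairs.
pmf = {0: 0.2, 1: 0.5, 2: 0.3}

assert all(0 <= p <= 1 for p in pmf.values()), "each p(x) must lie in [0, 1]"
assert abs(sum(pmf.values()) - 1.0) < 1e-12, "the p(x) must sum to 1"
print("valid probability distribution:", pmf)
```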
Probability distribution: discrete variable

The probability of an event is obtained by summing p(x) over the outcomes that make up the event. The cumulative probability distribution gives, for each x, the probability Pr(X ≤ x).
Example: Bernoulli distribution

Let G be the gender of the next new person you meet, where G = 0 indicates that the person is male and G = 1 indicates that she is female. The outcomes of G and their probabilities are

G = 1 with probability p
G = 0 with probability 1 − p
Probability distribution: continuous variable

The distribution is given by a mathematical formula, the density f(x), which shows all values x and their relative frequencies f(x). f(x) is not itself a probability; probabilities are areas under the curve, such as the area between two values a and b. Properties:

f(x) ≥ 0 for all x
∫ f(x) dx = 1, integrating over all x
Probability density function (p.d.f.).
Probability distribution: continuous variable

The cumulative probability distribution of a continuous variable is F(y) = Pr(Y ≤ y), the area under the p.d.f. to the left of y.
Uniform Distribution

1. Equally likely outcomes over an interval from a to b
2. Probability density function: f(x) = 1/(b − a) for a ≤ x ≤ b, and f(x) = 0 otherwise
3. Mean and standard deviation: μ = (a + b)/2 and σ = (b − a)/√12
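As a quick simulation check of these formulas (a and b chosen arbitrarily for illustration):

```python
import numpy as np

# Draw from Uniform[a, b] and compare sample moments with the formulas
# mu = (a + b)/2 and sigma = (b - a)/sqrt(12).
a, b = 2.0, 6.0
rng = np.random.default_rng(0)
x = rng.uniform(a, b, size=1_000_000)

print("sample mean:", x.mean(), " theory:", (a + b) / 2)           # ~4.0
print("sample std: ", x.std(), " theory:", (b - a) / np.sqrt(12))  # ~1.155
```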
Expected Values, Mean, and Variance
Expected value of a Bernoulli random variable

For the Bernoulli random variable G, E(G) = 1 × p + 0 × (1 − p) = p.
Expected value of a continuous random variable

Let f(y) be the p.d.f. of the random variable Y. Then the expected value of Y is E(Y) = ∫ y f(y) dy, integrating over all values of y.
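The integral can be evaluated numerically. A sketch using an assumed example, Y ~ N(1.5, 2²), where the answer should equal the mean, 1.5:

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

# E(Y) = integral of y * f(y) dy over the whole real line.
mu, sigma = 1.5, 2.0
expected, _err = quad(lambda y: y * norm.pdf(y, loc=mu, scale=sigma),
                      -np.inf, np.inf)
print(expected)  # ~1.5
```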
Variance, Standard Deviation, and Moments

The variance of Y is Var(Y) = σ²_Y = E[(Y − μ_Y)²], and the standard deviation σ_Y is the square root of the variance.
Variance of a Bernoulli random variable

The mean of the Bernoulli random variable G is μ_G = p, so its variance is Var(G) = σ²_G = (0 − p)²(1 − p) + (1 − p)²p = p(1 − p). The standard deviation is σ_G = √(p(1 − p)).
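A simulation check of both formulas, with p chosen arbitrarily:

```python
import numpy as np

# Simulate Bernoulli(p) draws and compare sample moments with
# E(G) = p and Var(G) = p(1 - p).
p = 0.3
rng = np.random.default_rng(1)
g = rng.binomial(1, p, size=1_000_000)

print("mean:", g.mean(), " theory:", p)            # ~0.3
print("var: ", g.var(), " theory:", p * (1 - p))   # ~0.21
```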
Moments

The expected value of Y^r is called the r-th moment of the random variable Y; that is, the r-th moment of Y is E(Y^r). The mean of Y, E(Y), is also called the first moment of Y.
Mean and Variance of a Linear Function of a Random Variable

Suppose X is a random variable with mean μ_X and variance σ²_X, and Y = a + bX. Then the mean and variance of Y are μ_Y = a + bμ_X and σ²_Y = b²σ²_X, and the standard deviation of Y is σ_Y = |b|σ_X.
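These rules are easy to confirm by simulation (the distribution of X and the constants a, b are assumptions made for illustration):

```python
import numpy as np

# Check mu_Y = a + b*mu_X, var_Y = b^2*var_X, and sd_Y = |b|*sd_X
# for Y = a + bX with X ~ N(2, 3^2).
a, b = 1.0, -2.0
rng = np.random.default_rng(2)
x = rng.normal(loc=2.0, scale=3.0, size=1_000_000)
y = a + b * x

print("mean:", y.mean(), " theory:", a + b * 2.0)    # ~-3.0
print("var: ", y.var(), " theory:", b**2 * 3.0**2)   # ~36.0
print("std: ", y.std(), " theory:", abs(b) * 3.0)    # ~6.0
```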
Two Random Variables: Joint and Marginal Distributions

The joint probability distribution of two discrete random variables, say X and Y, is the probability that the random variables simultaneously take on certain values, say x and y. The joint probability distribution can be written as the function Pr(X = x, Y = y).
The marginal probability distribution of a random variable Y is just another name for its probability distribution. It can be computed from the joint distribution by summing over all values of the other variable: Pr(Y = y) = Σ_x Pr(X = x, Y = y).
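A sketch with a hypothetical 2×2 joint table, where the marginals are just row and column sums:

```python
import numpy as np

# Rows index x, columns index y: joint[i, j] = Pr(X = i, Y = j).
joint = np.array([[0.15, 0.25],
                  [0.35, 0.25]])

marginal_x = joint.sum(axis=1)  # Pr(X = x): sum over y
marginal_y = joint.sum(axis=0)  # Pr(Y = y): sum over x
print("Pr(X=0), Pr(X=1):", marginal_x)  # [0.4 0.6]
print("Pr(Y=0), Pr(Y=1):", marginal_y)  # [0.5 0.5]
```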
The conditional distribution of Y given X = x is Pr(Y = y | X = x) = Pr(X = x, Y = y) / Pr(X = x).

The conditional expectation of Y given X = x is E(Y | X = x) = Σ_y y Pr(Y = y | X = x).
The mean of Y is the weighted average of the conditional expectation of Y given X, weighted by the probability distribution of X: E(Y) = Σ_x E(Y | X = x) Pr(X = x).
Stated differently, the expectation of Y is the expectation of the conditional expectation of Y given X; that is, E(Y) = E[E(Y | X)], where the inner expectation is computed using the conditional distribution of Y given X and the outer expectation is computed using the marginal distribution of X. This is known as the law of iterated expectations.
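The identity can be verified on the same kind of hypothetical joint table used above:

```python
import numpy as np

# Verify E(Y) = E[E(Y|X)] for a small discrete joint distribution.
joint = np.array([[0.15, 0.25],   # rows: X = 0, 1; columns: Y = 0, 1
                  [0.35, 0.25]])
y_vals = np.array([0.0, 1.0])

p_x = joint.sum(axis=1)            # marginal of X
cond_y = joint / p_x[:, None]      # Pr(Y = y | X = x), row by row
e_y_given_x = cond_y @ y_vals      # E(Y | X = x) for each x

lhs = joint.sum(axis=0) @ y_vals   # E(Y) from the marginal of Y
rhs = p_x @ e_y_given_x            # E[E(Y|X)]
print(lhs, rhs)                    # both 0.5
```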
Conditional variance

The variance of Y conditional on X is the variance of the conditional distribution of Y given X: Var(Y | X = x) = Σ_y [y − E(Y | X = x)]² Pr(Y = y | X = x).
Independence

Two random variables X and Y are independently distributed, or independent, if knowing the value of one of the variables provides no information about the other. That is, X and Y are independent if, for all values of x and y, Pr(Y = y | X = x) = Pr(Y = y).
Stated differently, X and Y are independent if Pr(X = x, Y = y) = Pr(X = x) Pr(Y = y). That is, the joint distribution of two independent random variables is the product of their marginal distributions.
Covariance and Correlation

Covariance: One measure of the extent to which two random variables move together is their covariance, Cov(X, Y) = σ_XY = E[(X − μ_X)(Y − μ_Y)].
Correlation: The correlation is an alternative measure of dependence between X and Y that solves the "units" problem of covariance: Corr(X, Y) = Cov(X, Y)/(σ_X σ_Y). The random variables X and Y are said to be uncorrelated if Corr(X, Y) = 0. The correlation is always between −1 and 1.
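A sketch comparing the two measures on simulated data (the construction Y = X + noise is an assumption made for illustration):

```python
import numpy as np

# Covariance carries the units of X times the units of Y;
# correlation is unit-free and bounded in [-1, 1].
rng = np.random.default_rng(3)
x = rng.normal(size=100_000)
y = x + rng.normal(scale=0.5, size=100_000)

print("Cov(X, Y): ", np.cov(x, y)[0, 1])        # ~1.0
print("Corr(X, Y):", np.corrcoef(x, y)[0, 1])   # ~0.89
```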
The Mean and Variance of Sums of Random Variables

E(X + Y) = μ_X + μ_Y, and Var(X + Y) = σ²_X + σ²_Y + 2σ_XY.
Normal, Chi-Squared, F, and t Distributions

The Normal Distribution

The probability density function of a normally distributed random variable (the normal p.d.f.) is

f(y) = [1/(σ√(2π))] exp(−(y − μ)²/(2σ²)),

where exp(x) is the exponential function of x. The factor 1/(σ√(2π)) ensures that the density integrates to one.
The normal distribution with mean μ and variance σ² is expressed as N(μ, σ²).
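Evaluating the p.d.f. formula by hand and via scipy gives identical values (the parameters below are arbitrary):

```python
import numpy as np
from scipy.stats import norm

# The normal p.d.f. written out, checked against scipy.stats.norm.pdf.
mu, sigma, y = 1.0, 2.0, 0.5
by_hand = np.exp(-(y - mu) ** 2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
print(by_hand, norm.pdf(y, loc=mu, scale=sigma))  # same number twice
```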
Black swan theory

The black swan theory, or theory of black swan events, is a metaphor that describes an event that comes as a surprise, has a major effect, and is often inappropriately rationalized after the fact with the benefit of hindsight. The theory was developed by Nassim Nicholas Taleb to explain:

1. the disproportionate role of high-profile, hard-to-predict, and rare events;
2. the non-computability of the probability of consequential rare events using scientific methods (owing to the very nature of small probabilities);
3. the psychological biases that make people individually and collectively blind to uncertainty and unaware of the massive role of the rare event in historical affairs.
The main idea in Taleb's book is not to attempt to predict black swan events, but to build robustness against negative ones that occur and be able to exploit positive ones. Taleb contends that banks and trading firms are very vulnerable to hazardous black swan events and are exposed to unpredictable losses. Taleb is highly critical of the widespread use of the normal distribution model as the basis for calculating risk.
The standard normal distribution is the normal distribution with mean μ = 0 and variance σ² = 1 and is denoted N(0, 1). A standard normal random variable is often denoted Z, and its cumulative distribution function is denoted Φ. Accordingly, Pr(Z ≤ c) = Φ(c), where c is a constant.
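Φ is available in any statistics library; for example, with scipy (the cutoff values are arbitrary):

```python
from scipy.stats import norm

# Pr(Z <= c) = Phi(c) for a standard normal Z.
print(norm.cdf(1.96))   # ~0.975
print(norm.cdf(0.0))    # 0.5, by symmetry
```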
Key Concept 2.4: probabilities involving a normal random variable Y are computed by standardizing, Z = (Y − μ)/σ, and using the standard normal c.d.f.: Pr(Y ≤ c) = Φ((c − μ)/σ).
The Empirical Rule (normal distribution)
For a normally distributed random variable:

90% of the probability lies within μ ± 1.64σ
95% lies within μ ± 1.96σ
99% lies within μ ± 2.58σ
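These cutoffs come straight from the inverse of Φ:

```python
from scipy.stats import norm

# Recover the z values bracketing 90%, 95%, and 99% of a normal distribution.
for coverage in (0.90, 0.95, 0.99):
    z = norm.ppf((1 + coverage) / 2)   # upper-tail cutoff
    print(f"{coverage:.0%}: mu +/- {z:.2f} sigma")
# prints 1.64, 1.96, 2.58
```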
The bivariate normal distribution

The bivariate normal p.d.f. for the two random variables X and Y is

f(x, y) = [1/(2πσ_Xσ_Y√(1 − ρ²))] exp{ −[1/(2(1 − ρ²))] [((x − μ_X)/σ_X)² − 2ρ((x − μ_X)/σ_X)((y − μ_Y)/σ_Y) + ((y − μ_Y)/σ_Y)²] },

where ρ is the correlation between X and Y.
Important properties of the normal distribution

1. If X and Y have a bivariate normal distribution with covariance σ_XY, and if a and b are two constants, then aX + bY ~ N(aμ_X + bμ_Y, a²σ²_X + b²σ²_Y + 2abσ_XY).
2. The marginal distribution of each of the two variables is normal. This follows by setting a = 1, b = 0 in property 1.
3. If σ_XY = 0, then X and Y are independent.
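Property 1 can be checked by simulation (all parameter values below are assumptions made for illustration):

```python
import numpy as np

# aX + bY should have mean a*mu_X + b*mu_Y and variance
# a^2*var_X + b^2*var_Y + 2*a*b*cov_XY for bivariate normal (X, Y).
mu = [1.0, 2.0]
cov = [[1.0, 0.6],   # var(X) = 1, cov(X, Y) = 0.6
       [0.6, 4.0]]   # var(Y) = 4
a, b = 2.0, -1.0
rng = np.random.default_rng(4)
xy = rng.multivariate_normal(mu, cov, size=1_000_000)
w = a * xy[:, 0] + b * xy[:, 1]

print("mean:", w.mean(), " theory:", a * mu[0] + b * mu[1])                  # ~0.0
print("var: ", w.var(), " theory:", a**2 * 1 + b**2 * 4 + 2 * a * b * 0.6)   # ~5.6
```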
The Chi-squared distribution

The chi-squared distribution is the distribution of the sum of m squared independent standard normal random variables. The distribution depends on m, which is called the degrees of freedom of the chi-squared distribution. A chi-squared distribution with m degrees of freedom is denoted χ²_m.
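The definition translates directly into a simulation (m is arbitrary here):

```python
import numpy as np
from scipy.stats import chi2

# Sum m squared independent standard normals; the result should follow chi2(m).
m = 5
rng = np.random.default_rng(5)
draws = (rng.standard_normal((1_000_000, m)) ** 2).sum(axis=1)

print("simulated 95th percentile:", np.quantile(draws, 0.95))
print("exact chi2(5) percentile: ", chi2.ppf(0.95, df=m))   # ~11.07
```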
The F distribution

The F_{m,n} distribution is the distribution of (W/m)/(V/n), where W ~ χ²_m and V ~ χ²_n are independent. When n is ∞, the result is the F_{m,∞} distribution.

The F_{m,∞} distribution is the distribution of a random variable with a chi-squared distribution with m degrees of freedom, divided by m. Equivalently, the F_{m,∞} distribution is the distribution of the average of m squared standard normal random variables.
The Student t Distribution

The Student t distribution with m degrees of freedom is defined to be the distribution of a standard normal random variable divided by the square root of an independently distributed chi-squared random variable with m degrees of freedom that has itself been divided by m.
That is, let Z be a standard normal random variable, let W be a random variable with a chi-squared distribution with m degrees of freedom, and let Z and W be independently distributed. Then Z/√(W/m) has a Student t distribution with m degrees of freedom, denoted t_m. When m is 30 or more, the Student t distribution is well approximated by the standard normal distribution, and the t_∞ distribution equals the standard normal distribution.
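The construction is again easy to simulate (m chosen arbitrarily):

```python
import numpy as np
from scipy.stats import t

# Build t_m draws as Z / sqrt(W/m), then compare a tail probability
# with scipy's Student t distribution.
m = 4
rng = np.random.default_rng(6)
z = rng.standard_normal(1_000_000)
w = rng.chisquare(m, size=1_000_000)
t_draws = z / np.sqrt(w / m)

print("simulated Pr(T > 2):", (t_draws > 2).mean())
print("exact 1 - F(2):     ", 1 - t.cdf(2, df=m))   # ~0.058
```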
Random Sampling Simple random sampling is the simplest sampling scheme in which n objects are selected at random from a population and each member of the population is equally likely to be included in the sample. Since the members of the population included in the sample are selected at random, the values of the observations Y1, … , Yn are themselves random.
i.i.d. draws. Because Y1, … , Yn are randomly drawn from the same population, the marginal distribution of Yi is the same for each i = 1, … , n; Y1, … , Yn are said to be identically distributed. When Y1, … , Yn are drawn from the same distribution and are independently distributed, they are said to be independently and identically distributed, or i.i.d.
Sampling Distribution of the Sample Average

The sample average of the n observations Y1, … , Yn is Ȳ = (1/n) Σ_i Yi. Because Y1, … , Yn are random, their average Ȳ is random and has a probability distribution. The distribution of Ȳ is called the sampling distribution of Ȳ.
Mean and Variance of Ȳ

Suppose Y1, … , Yn are i.i.d. and let μ_Y and σ²_Y denote the mean and variance of Yi. Then E(Ȳ) = μ_Y and Var(Ȳ) = σ²_Y/n.
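Both results can be seen by simulating many samples (the population and sample size below are assumptions made for illustration):

```python
import numpy as np

# Simulate 200,000 samples of size n and check E(Ybar) = mu_Y
# and Var(Ybar) = sigma_Y^2 / n.
mu_y, sigma_y, n = 10.0, 3.0, 25
rng = np.random.default_rng(7)
samples = rng.normal(mu_y, sigma_y, size=(200_000, n))
y_bar = samples.mean(axis=1)

print("mean of Ybar:", y_bar.mean(), " theory:", mu_y)             # ~10.0
print("var of Ybar: ", y_bar.var(), " theory:", sigma_y**2 / n)    # ~0.36
```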
Sampling distribution of Ȳ when Y is normally distributed

A linear combination of normally distributed random variables is also normally distributed (equation 2.42). Therefore, if Y1, … , Yn are i.i.d. draws from N(μ_Y, σ²_Y), then Ȳ ~ N(μ_Y, σ²_Y/n).
Large-Sample Approximations to Sampling Distributions

There are two approaches to characterizing sampling distributions: the exact distribution, or finite-sample distribution, when the distribution of Y is known; and the asymptotic distribution, a large-sample approximation to the sampling distribution.
Law of Large Numbers

The law of large numbers states that, under general conditions, Ȳ will be near μ_Y with very high probability when n is large. The property that Ȳ is near μ_Y with increasing probability as n increases is called convergence in probability or, equivalently, consistency. The law of large numbers states that, under certain conditions, Ȳ converges in probability to μ_Y; that is, Ȳ is consistent for μ_Y.
Key Concept 2.6: convergence in probability, consistency, and the law of large numbers.
The conditions for the law of large numbers are:

1. Yi, i = 1, …, n, are i.i.d.;
2. the variance of Yi, σ²_Y, is finite.
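A sketch of the law in action, for an assumed Bernoulli(0.78) population (both conditions hold):

```python
import numpy as np

# Ybar should settle down near mu_Y = p as n grows.
p = 0.78
rng = np.random.default_rng(8)
for n in (10, 100, 10_000, 1_000_000):
    y_bar = rng.binomial(1, p, size=n).mean()
    print(f"n = {n:>9,}:  Ybar = {y_bar:.4f}   (mu_Y = {p})")
```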
Developing Sampling Distributions

Suppose there's a population with size N = 4 and a random variable x taking the values 1, 2, 3, 4, each equally likely (a uniform distribution).
Population Characteristics

Summary measures: each value has probability P(x) = .25, so the population mean is μ = (1 + 2 + 3 + 4)/4 = 2.5 and the population standard deviation is σ = √[Σ(xi − μ)²/N] ≈ 1.12.
All Possible Samples of Size n = 2

Sampling with replacement gives 16 possible samples. Each cell below shows the sample mean for the (1st observation, 2nd observation) pair:

           2nd obs:  1     2     3     4
1st obs 1:           1.0   1.5   2.0   2.5
        2:           1.5   2.0   2.5   3.0
        3:           2.0   2.5   3.0   3.5
        4:           2.5   3.0   3.5   4.0
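The same enumeration in a few lines of Python:

```python
from itertools import product

# All 4^2 = 16 ordered samples of size 2, drawn with replacement
# from the population {1, 2, 3, 4}, and their sample means.
population = [1, 2, 3, 4]
means = [sum(s) / 2 for s in product(population, repeat=2)]
print(sorted(means))
# 1.0 once, 1.5 twice, 2.0 three times, 2.5 four times, then symmetric down to 4.0 once
```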
Sampling Distribution of All Sample Means

Collecting the 16 sample means gives the sampling distribution of the sample mean:

x̄:     1.0    1.5    2.0    2.5    3.0    3.5    4.0
P(x̄):  1/16   2/16   3/16   4/16   3/16   2/16   1/16
Sampling Distribution

Comparison: the population distribution is flat (uniform over 1, 2, 3, 4), while the sampling distribution of the sample mean is centered at the same value (2.5) but is more concentrated and peaked in the middle.
Chebyshev Inequality

For any random variable X ~ (μ, σ²) and for any k > 0, Pr(|X − μ| ≥ kσ) ≤ 1/k². The inequality can be rewritten as Pr(|X − μ| < kσ) ≥ 1 − 1/k². Therefore, the inequality says that with probability at least 1 − 1/k², the random variable will fall within the region μ ± kσ.
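A quick comparison for the normal case shows the bound holding, though loosely, since Chebyshev must cover every distribution:

```python
from scipy.stats import norm

# Exact two-sided normal tail Pr(|X - mu| >= k*sigma) versus the
# distribution-free Chebyshev bound 1/k^2.
for k in (1.5, 2.0, 3.0):
    exact = 2 * (1 - norm.cdf(k))
    print(f"k = {k}:  exact = {exact:.4f}   Chebyshev bound = {1 / k**2:.4f}")
```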
Proof of the Weak LLN

For the sample average Ȳ of n i.i.d. draws with mean μ and variance σ², E(Ȳ) = μ and Var(Ȳ) = σ²/n. According to the Chebyshev inequality, for any ε > 0 we have Pr(|Ȳ − μ| ≥ ε) ≤ Var(Ȳ)/ε² = σ²/(nε²), which tends to 0 as n → ∞.
Formal definitions of consistency and the law of large numbers

Consistency and convergence in probability. Let S1, S2, … , Sn, … be a sequence of random variables. For example, Sn could be the sample average Ȳ of a sample of n observations of the random variable Y. The sequence of random variables {Sn} is said to converge in probability to a limit μ if the probability that Sn is within ±δ of μ tends to one as n → ∞, for every positive constant δ.
That is, Sn →p μ if and only if Pr[|Sn − μ| ≥ δ] → 0 as n → ∞ for every δ > 0. If Sn →p μ, then Sn is said to be a consistent estimator of μ.

The law of large numbers: if Y1, … , Yn are i.i.d., E(Yi) = μ_Y, and Var(Yi) < ∞, then Ȳ →p μ_Y.
The Central Limit Theorem

The central limit theorem says that, under general conditions, the distribution of Ȳ is well approximated by a normal distribution when n is large. Since the mean of Ȳ is μ_Y and its variance is σ²_Y/n, when n is large the distribution of Ȳ is approximately N(μ_Y, σ²_Y/n). Accordingly, (Ȳ − μ_Y)/(σ_Y/√n) is well approximated by the standard normal distribution N(0, 1).
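A sketch with a deliberately non-normal population (exponential, an assumption made for illustration):

```python
import numpy as np
from scipy.stats import norm

# Standardized sample means of an exponential population (mu_Y = 1,
# sigma_Y = 1) should be approximately N(0, 1) for moderately large n.
n = 100
rng = np.random.default_rng(9)
samples = rng.exponential(scale=1.0, size=(200_000, n))
z = (samples.mean(axis=1) - 1.0) / (1.0 / np.sqrt(n))

print("simulated Pr(Z <= 1.96):", (z <= 1.96).mean())
print("normal approx Phi(1.96):", norm.cdf(1.96))   # ~0.975
```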
Convergence in distribution

Let F1, … , Fn, … be a sequence of cumulative distribution functions corresponding to a sequence of random variables S1, … , Sn, … . The sequence of random variables {Sn} is said to converge in distribution to S (denoted Sn →d S) if the distribution functions {Fn} converge to F, the c.d.f. of S.
That is, Sn →d S if and only if lim(n→∞) Fn(t) = F(t), where the limit holds at all points t at which the limiting distribution F is continuous. The distribution F is called the asymptotic distribution of Sn.
The central limit theorem

If Y1, … , Yn are i.i.d. and 0 < σ²_Y < ∞, then (Ȳ − μ_Y)/(σ_Y/√n) →d N(0, 1). In other words, the asymptotic distribution of the standardized sample average is N(0, 1).
Slutsky's theorem

Slutsky's theorem combines consistency and convergence in distribution. Suppose that a_n →p a, where a is a constant, and Sn →d S. Then a_n + Sn →d a + S, a_n Sn →d aS, and, provided a ≠ 0, Sn/a_n →d S/a.
Continuous mapping theorem

If g is a continuous function, then (i) if Sn →p a, then g(Sn) →p g(a); and (ii) if Sn →d S, then g(Sn) →d g(S).
But how large an n is "large enough"?

The answer depends on the distribution of the underlying Yi that make up the average. At one extreme, if the Yi are themselves normally distributed, then Ȳ is exactly normally distributed for all n. In contrast, when the Yi are far from normally distributed, the approximation can require n = 30 or even more.
Example: A skewed distribution.