Chapter 4: Mathematical Expectation

4.1 Mean of a Random Variable:

Definition 4.1: Let X be a random variable with a probability distribution f(x). The mean (or expected value) of X is denoted by $\mu_X$ (or E(X)) and is defined by:

$\mu_X = E(X) = \sum_{x} x\,f(x)$ if X is discrete,
$\mu_X = E(X) = \int_{-\infty}^{\infty} x\,f(x)\,dx$ if X is continuous.

Example 4.1: (Reading Assignment)

Example (Example 3.3): A shipment of 8 similar microcomputers to a retail outlet contains 3 defective and 5 non-defective units. If a school makes a random purchase of 2 of these computers, find the expected number of defective computers purchased.

Solution: Let X = the number of defective computers purchased. In Example 3.3, we found that the probability distribution of X is:

x                0        1        2
f(x) = P(X=x)    10/28    15/28    3/28

or, equivalently, $f(x) = \dfrac{\binom{3}{x}\binom{5}{2-x}}{\binom{8}{2}}$ for x = 0, 1, 2.

The expected value of the number of defective computers purchased is the mean of X:

$\mu_X = E(X) = (0)f(0) + (1)f(1) + (2)f(2) = (0)\tfrac{10}{28} + (1)\tfrac{15}{28} + (2)\tfrac{3}{28} = \tfrac{21}{28} = 0.75$ (computers)

Example 4.3: Let X be a continuous random variable that represents the life (in hours) of a certain electronic device. The pdf of X is given by:

$f(x) = \dfrac{20{,}000}{x^3}$ for $x > 100$; $f(x) = 0$ elsewhere.

Find the expected life of this type of device.

Solution:

$\mu_X = E(X) = \int_{100}^{\infty} x \cdot \frac{20{,}000}{x^3}\,dx = \int_{100}^{\infty} \frac{20{,}000}{x^2}\,dx = \left[-\frac{20{,}000}{x}\right]_{100}^{\infty} = 200$ (hours)

Therefore, we expect this type of electronic device to last, on average, 200 hours.
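Both calculations above are easy to spot-check in code. The following is a minimal Python sketch (ours, not from the slides): it evaluates the discrete sum for the defective-computers example exactly, and the lifetime integral numerically with scipy.integrate.quad.

```python
from fractions import Fraction
from scipy.integrate import quad

# Discrete case (Example 3.3): E(X) = sum of x * f(x) over the support
pmf = {0: Fraction(10, 28), 1: Fraction(15, 28), 2: Fraction(3, 28)}
mean_discrete = sum(x * p for x, p in pmf.items())
print(mean_discrete)  # 3/4, i.e., 0.75 computers

# Continuous case (Example 4.3): E(X) = integral of x * f(x) dx over x > 100
pdf = lambda x: 20_000 / x**3
mean_continuous, _err = quad(lambda x: x * pdf(x), 100, float("inf"))
print(mean_continuous)  # approximately 200.0 hours
```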

Theorem 4.1: Let X be a random variable with a probability distribution f(x), and let g(X) be a function of the random variable X. The mean (or expected value) of the random variable g(X) is denoted by $\mu_{g(X)}$ (or E[g(X)]) and is defined by:

$\mu_{g(X)} = E[g(X)] = \sum_{x} g(x)\,f(x)$ if X is discrete,
$\mu_{g(X)} = E[g(X)] = \int_{-\infty}^{\infty} g(x)\,f(x)\,dx$ if X is continuous.

Example: Let X be a discrete random variable with the following probability distribution (the pmf of Example 3.3):

x       0        1        2
f(x)    10/28    15/28    3/28

Find E[g(X)], where g(X) = (X − 1)².

Solution: With g(X) = (X − 1)²:

$E[g(X)] = (0-1)^2 f(0) + (1-1)^2 f(1) + (2-1)^2 f(2) = (1)\tfrac{10}{28} + (0)\tfrac{15}{28} + (1)\tfrac{3}{28} = \tfrac{13}{28}$
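Theorem 4.1 means we never need the distribution of g(X) itself: we simply weight g(x) by f(x). A short sketch, reusing the pmf above:

```python
from fractions import Fraction

# pmf of X from Example 3.3
pmf = {0: Fraction(10, 28), 1: Fraction(15, 28), 2: Fraction(3, 28)}
g = lambda x: (x - 1) ** 2

# E[g(X)] = sum of g(x) * f(x); no distribution for g(X) is needed
expected_g = sum(g(x) * p for x, p in pmf.items())
print(expected_g)  # 13/28
```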

Example: In Example 4.3, find E[g(X)] for the function g specified there. Solution: by Theorem 4.1 (continuous case), $E[g(X)] = \int_{100}^{\infty} g(x)\,\frac{20{,}000}{x^3}\,dx$.

4.2 Variance (of a Random Variable):

The most important measure of variability of a random variable X is called the variance of X and is denoted by Var(X) or $\sigma_X^2$.

Definition 4.3: Let X be a random variable with a probability distribution f(x) and mean μ. The variance of X is defined by:

$\sigma_X^2 = Var(X) = E[(X-\mu)^2] = \sum_{x} (x-\mu)^2 f(x)$ if X is discrete,
$\sigma_X^2 = Var(X) = E[(X-\mu)^2] = \int_{-\infty}^{\infty} (x-\mu)^2 f(x)\,dx$ if X is continuous.

Definition: The positive square root of the variance of X, $\sigma_X = \sqrt{Var(X)}$, is called the standard deviation of X.

Note: Var(X) = E[g(X)], where g(X) = (X − μ)².

Theorem 4.2: The variance of the random variable X is given by:

$\sigma_X^2 = Var(X) = E(X^2) - \mu^2$, where $\mu = E(X)$.

Example 4.9: Let X be a discrete random variable with the following probability distribution:

x       0      1      2      3
f(x)    0.51   0.38   0.10   0.01

Find Var(X) = $\sigma_X^2$.

Solution:

$\mu = E(X) = (0)f(0) + (1)f(1) + (2)f(2) + (3)f(3) = (0)(0.51) + (1)(0.38) + (2)(0.10) + (3)(0.01) = 0.61$

1. First method (directly from the definition):
$\sigma_X^2 = E[(X-\mu)^2] = (0-0.61)^2 f(0) + (1-0.61)^2 f(1) + (2-0.61)^2 f(2) + (3-0.61)^2 f(3)$
$= (0.61)^2(0.51) + (0.39)^2(0.38) + (1.39)^2(0.10) + (2.39)^2(0.01) = 0.4979$

2. Second method (Theorem 4.2):
$E(X^2) = (0^2)f(0) + (1^2)f(1) + (2^2)f(2) + (3^2)f(3) = (0)(0.51) + (1)(0.38) + (4)(0.10) + (9)(0.01) = 0.87$
$\sigma_X^2 = E(X^2) - \mu^2 = 0.87 - (0.61)^2 = 0.4979$
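Both methods take one line each in code; a minimal sketch for Example 4.9 (float arithmetic, so expect tiny rounding differences):

```python
pmf = {0: 0.51, 1: 0.38, 2: 0.10, 3: 0.01}

mu = sum(x * p for x, p in pmf.items())  # E(X) = 0.61

# Method 1: directly from the definition, Var(X) = E[(X - mu)^2]
var_direct = sum((x - mu) ** 2 * p for x, p in pmf.items())

# Method 2: Theorem 4.2, Var(X) = E(X^2) - mu^2
ex2 = sum(x**2 * p for x, p in pmf.items())  # E(X^2) = 0.87
var_shortcut = ex2 - mu**2

print(mu, var_direct, var_shortcut)  # 0.61, 0.4979, 0.4979
```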

Example 4.10: Let X be a continuous random variable with the following pdf:

$f(x) = 2(x-1)$ for $1 < x < 2$; $f(x) = 0$ elsewhere.

Find the mean and the variance of X.

Solution:

$\mu = E(X) = \int_{1}^{2} x \cdot 2(x-1)\,dx = \tfrac{5}{3}$
$E(X^2) = \int_{1}^{2} x^2 \cdot 2(x-1)\,dx = \tfrac{17}{6}$
$\sigma_X^2 = Var(X) = E(X^2) - \mu^2 = \tfrac{17}{6} - \left(\tfrac{5}{3}\right)^2 = \tfrac{1}{18}$
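The two integrals can also be evaluated exactly with SymPy; a minimal sketch for Example 4.10:

```python
import sympy as sp

x = sp.symbols("x")
pdf = 2 * (x - 1)  # pdf of X on 1 < x < 2

mu = sp.integrate(x * pdf, (x, 1, 2))        # 5/3
ex2 = sp.integrate(x**2 * pdf, (x, 1, 2))    # 17/6
var = sp.simplify(ex2 - mu**2)               # 1/18
print(mu, ex2, var)
```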

4.3 Means and Variances of Linear Combinations of Random Variables:

If X1, X2, …, Xn are n random variables and a1, a2, …, an are constants, then the random variable

$Y = a_1 X_1 + a_2 X_2 + \cdots + a_n X_n$

is called a linear combination of the random variables X1, X2, …, Xn.

Theorem 4.5: If X is a random variable with mean μ = E(X), and if a and b are constants, then:

$E(aX \pm b) = a\,E(X) \pm b$, i.e., $\mu_{aX \pm b} = a\,\mu_X \pm b$

Corollary 1: E(b) = b (take a = 0 in Theorem 4.5)
Corollary 2: E(aX) = a E(X) (take b = 0 in Theorem 4.5)

Example 4.16: Let X be a random variable with the following probability density function:

$f(x) = \dfrac{x^2}{3}$ for $-1 < x < 2$; $f(x) = 0$ elsewhere.

Find E(4X + 3).

Solution:

$E(X) = \int_{-1}^{2} x \cdot \frac{x^2}{3}\,dx = \left[\frac{x^4}{12}\right]_{-1}^{2} = \frac{5}{4}$
$E(4X+3) = 4\,E(X) + 3 = 4\left(\tfrac{5}{4}\right) + 3 = 8$

Another solution: take g(X) = 4X + 3 in Theorem 4.1:

$E(4X+3) = \int_{-1}^{2} (4x+3)\,\frac{x^2}{3}\,dx = 8$
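The two routes in the solution (linearity first, or integrating g(x)f(x) directly) should agree; a quick numerical cross-check with SciPy, assuming the pdf as reconstructed above:

```python
from scipy.integrate import quad

pdf = lambda x: x**2 / 3  # pdf of X on -1 < x < 2

ex, _ = quad(lambda x: x * pdf(x), -1, 2)                # E(X) = 1.25
direct, _ = quad(lambda x: (4 * x + 3) * pdf(x), -1, 2)  # E(4X + 3) directly
print(4 * ex + 3, direct)  # both approximately 8.0
```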

Theorem: If X1, X2, …, Xn are n random variables and a1, a2, …, an are constants, then:

$E(a_1X_1 + a_2X_2 + \cdots + a_nX_n) = a_1E(X_1) + a_2E(X_2) + \cdots + a_nE(X_n)$

Corollary: If X and Y are random variables, then E(X ± Y) = E(X) ± E(Y).

Theorem 4.9: If X is a random variable with variance $\sigma_X^2 = Var(X)$, and if a and b are constants, then:

$Var(aX \pm b) = a^2\,Var(X)$, i.e., $\sigma_{aX \pm b}^2 = a^2 \sigma_X^2$

Theorem: If X1, X2, …, Xn are n independent random variables and a1, a2, …, an are constants, then:

$Var(a_1X_1 + a_2X_2 + \cdots + a_nX_n) = a_1^2\,Var(X_1) + a_2^2\,Var(X_2) + \cdots + a_n^2\,Var(X_n)$

Corollary: If X and Y are independent random variables, then:
·      Var(aX + bY) = a² Var(X) + b² Var(Y)
·      Var(aX − bY) = a² Var(X) + b² Var(Y)
·      Var(X ± Y) = Var(X) + Var(Y)

Example: Let X and Y be two independent random variables such that E(X) = 2, Var(X) = 4, E(Y) = 7, and Var(Y) = 1. Find:
1.    E(3X + 7) and Var(3X + 7)
2.    E(5X + 2Y − 2) and Var(5X + 2Y − 2)

Solution:
1. E(3X + 7) = 3E(X) + 7 = 3(2) + 7 = 13
   Var(3X + 7) = (3)² Var(X) = (9)(4) = 36
2. E(5X + 2Y − 2) = 5E(X) + 2E(Y) − 2 = (5)(2) + (2)(7) − 2 = 22
   Var(5X + 2Y − 2) = Var(5X + 2Y) = 5² Var(X) + 2² Var(Y) = (25)(4) + (4)(1) = 104
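The theorems hold for any distributions with the stated means and variances, so part 2 can be spot-checked by simulation. In the sketch below the normal distributions are our own choice purely for illustration; the example specifies only the moments:

```python
import numpy as np

rng = np.random.default_rng(seed=0)
n = 1_000_000
X = rng.normal(loc=2, scale=2, size=n)  # E(X) = 2, Var(X) = 2^2 = 4 (assumed normal)
Y = rng.normal(loc=7, scale=1, size=n)  # E(Y) = 7, Var(Y) = 1, independent of X

Z = 5 * X + 2 * Y - 2
print(Z.mean(), Z.var())  # approximately 22 and 104, as the theorems predict
```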

4.4 Chebyshev's Theorem:

* Suppose that X is any random variable with mean E(X) = μ, variance Var(X) = σ², and standard deviation σ.
* Chebyshev's Theorem gives a conservative estimate of the probability that the random variable X assumes a value within k standard deviations (kσ) of its mean μ, which is P(μ − kσ < X < μ + kσ).
* $P(\mu - k\sigma < X < \mu + k\sigma) \geq 1 - \dfrac{1}{k^2}$

Theorem 4.11 (Chebyshev's Theorem): Let X be a random variable with mean E(X) = μ and variance Var(X) = σ². Then for k > 1 we have:

$P(\mu - k\sigma < X < \mu + k\sigma) \geq 1 - \frac{1}{k^2}$, equivalently $P(|X - \mu| < k\sigma) \geq 1 - \frac{1}{k^2}$

Example 4.22: Let X be a random variable having an unknown distribution with mean μ = 8 and variance σ² = 9 (standard deviation σ = 3). Find the following probabilities:
(a) P(−4 < X < 20)
(b) P(|X − 8| ≥ 6)

P( k <X<  +k) 1 Solution: (a) P(4 <X< 20)= ?? P( k <X<  +k) 1 (4 <X< 20)= ( k <X<  +k)    4=  k   4= 8 k(3) or 20= + k  20= 8+ k(3)   4= 8 3k  20= 8+ 3k  3k=12  3k=12  k=4  k=4 Therefore, P(4 <X< 20) , and hence,P(4 <X< 20)  (approximately)

(b) P(|X 8|  6)= ?? P(|X8|  6)=1  P(|X8| < 6) P(|X 8| < 6)= ?? P(|X | < k) 1 (|X8| < 6) = (|X | < k) 6= k  6=3k  k=2 P(|X8| < 6)  1  P(|X8| < 6)  1   1  P(|X8| < 6)   P(|X8|  6)  Therefore, P(|X8|  6)  (approximately)

Another solution for part (b):

P(|X − 8| < 6) = P(−6 < X − 8 < 6) = P(−6 + 8 < X < 6 + 8) = P(2 < X < 14)

Matching (2 < X < 14) with (μ − kσ < X < μ + kσ): 2 = μ − kσ ⟹ 2 = 8 − 3k ⟹ 3k = 6 ⟹ k = 2.

$P(2 < X < 14) \geq 1 - \frac{1}{4} = \frac{3}{4}$, so $P(|X-8| < 6) \geq \frac{3}{4}$, and as in the first solution,

$P(|X-8| \geq 6) = 1 - P(|X-8| < 6) \leq \frac{1}{4}$

Therefore, P(|X − 8| ≥ 6) ≈ 1/4 (approximately).
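Chebyshev's bound is deliberately conservative, since it must hold for every distribution with the given mean and variance; for any particular distribution the true tail probability is usually far below 1/4. The simulation below uses a gamma distribution with μ = 8 and σ = 3 as an illustrative assumption (the example itself leaves the distribution unknown):

```python
import numpy as np

rng = np.random.default_rng(seed=0)
n = 1_000_000

# Gamma with mean = shape*scale = 8 and variance = shape*scale^2 = 9
shape, scale = 64 / 9, 9 / 8
X = rng.gamma(shape, scale, size=n)

tail = np.mean(np.abs(X - 8) >= 6)
print(tail)  # empirically well below the Chebyshev bound of 1/4
```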