Boot camp in Probability


1 Boot camp in Probability
TIM 209 Prof. Ram Akella

2 Basic Probability If we toss a coin twice, the sample space of outcomes is {HH, HT, TH, TT}
Event – a subset of the sample space, e.g. "exactly one head comes up" = {HT, TH}; the probability of this event is 2/4 = 1/2

3 Permutations Suppose that we are given n distinct objects and wish to arrange r of them in a line, where the order matters. The number of arrangements is nPr = n! / (n − r)!. Example: the rankings of the schools

4 Combination If we want to select r objects without regard to order, we use the combination, denoted nCr = n! / (r! (n − r)!). Example: the toppings for the pizza

5 Venn Diagram (Figure: sample space S containing events A and B, with overlapping region A ∩ B.)

6 Probability Theorems Theorem 1: The probability of an event lies between 0 and 1, i.e. 0 ≤ P(E) ≤ 1. Proof: Let S be the sample space and E the event. The number of elements in E cannot be less than 0 (i.e. negative) nor greater than the number of elements in S, so 0 ≤ n(E) ≤ n(S). Dividing by n(S) gives 0 ≤ n(E)/n(S) ≤ 1, i.e. 0 ≤ P(E) ≤ 1.

7 Probability Theorems Theorem 2: The probability of an impossible event is 0, i.e. P(E) = 0. Proof: Since E has no elements, n(E) = 0. From the definition of probability: P(E) = n(E)/n(S) = 0/n(S) = 0.

8 Probability Theorems Theorem 3: The probability of a sure event is 1, i.e. P(S) = 1, where S is the sure event. Proof: For a sure event, n(E) = n(S), since the number of elements in the event equals the number of elements in the sample space. By the definition of probability: P(S) = n(S)/n(S) = 1.

9 Probability Theorems Theorem 4: If two events A and B are such that A ⊆ B, then P(A) ≤ P(B). Proof: Since A is a subset of B, set theory gives that the number of elements in A cannot exceed the number of elements in B: n(A) ≤ n(B). Dividing by n(S) gives n(A)/n(S) ≤ n(B)/n(S), so P(A) ≤ P(B).

10 Probability Theorems Theorem 5: If E is any event and E1 is the complement of E, then P(E) + P(E1) = 1. Proof: Let S be the sample space; then n(E) + n(E1) = n(S). Dividing by n(S) gives n(E)/n(S) + n(E1)/n(S) = 1, i.e. P(E) + P(E1) = 1.

11 Computing Conditional Probabilities
Conditional probability P(A|B) is the probability of event A given that event B has occurred: P(A|B) = P(A ∩ B) / P(B), where P(A ∩ B) = joint probability of A and B, P(A) = marginal probability of A, P(B) = marginal probability of B

12 Computing Joint and Marginal Probabilities
The probability of a joint event A and B: P(A and B) = P(A|B) P(B). Independent events: P(B|A) = P(B), equivalent to P(A and B) = P(A) P(B). Bayes' Theorem: P(Ai|B) = P(B|Ai) P(Ai) / [P(B|A1) P(A1) + … + P(B|An) P(An)], where A1, A2, …, An are mutually exclusive (no two can occur together) and collectively exhaustive (one of them must occur).

13 Visualizing Events Contingency Tables and Tree Diagrams
Example: a full deck of 52 cards split by color and by ace/not ace: 2 black aces, 24 black non-aces, 2 red aces, 24 red non-aces. These displays illustrate the sample space, events, joint events, and the complement of an event. Mutually exclusive events: drawing the queen of hearts and drawing the queen of diamonds cannot both occur on a single draw. Collectively exhaustive events: together they cover the entire sample space.

14 Joint Probabilities Using Contingency Table
            Event B1      Event B2      Total
Event A1    P(A1 ∩ B1)    P(A1 ∩ B2)    P(A1)
Event A2    P(A2 ∩ B1)    P(A2 ∩ B2)    P(A2)
Total       P(B1)         P(B2)         1

The interior cells are joint probabilities; the row and column totals are marginal (simple) probabilities.

15 Example Of the cars on a used car lot, 70% have air conditioning (AC) and 40% have a CD player (CD). 20% of the cars have a CD player but not AC. What is the probability that a car has a CD player, given that it has AC ?
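As a quick check, the slide's conditional-probability arithmetic can be reproduced in a few lines of Python (the variable names here are my own, not from the slides):

```python
# Given: P(AC) = 0.7, P(CD) = 0.4, P(CD and no AC) = 0.2
p_ac = 0.7
p_cd = 0.4
p_cd_no_ac = 0.2

# P(CD and AC) = P(CD) - P(CD and no AC)
p_cd_and_ac = p_cd - p_cd_no_ac          # 0.2

# P(CD | AC) = P(CD and AC) / P(AC)
p_cd_given_ac = p_cd_and_ac / p_ac       # about 0.2857
```

So a little under 29% of the cars with AC also have a CD player.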

16 Introduction to Probability Distributions
Random variable: represents a possible numerical value resulting from an uncertain event. Random variables are either discrete or continuous.

17 Mean and Variance of a Discrete Random Variable
Mean (expected value): E(X) = Σ Xi P(Xi). Variance: σ² = Σ [Xi − E(X)]² P(Xi). Standard deviation: σ = √σ², where E(X) = expected value of the discrete random variable X, Xi = the ith outcome of X, P(Xi) = probability of the ith outcome of X

18 Example: Toss 2 coins, let X = # of heads
Possible number of heads = 0, 1, or 2, with P(X) = 0.25, 0.50, 0.25. Compute the expected value of X: E(X) = (0 × 0.25) + (1 × 0.50) + (2 × 0.25) = 1.0. Compute the standard deviation: σ = √Σ[Xi − E(X)]² P(Xi) = √0.50 ≈ 0.707.
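The two-coin computation can be verified with a short sketch (the probability table 0.25/0.50/0.25 is taken from the slide):

```python
outcomes = [0, 1, 2]            # possible numbers of heads
probs = [0.25, 0.50, 0.25]      # P(X) for two fair coin tosses

# E(X) = sum of Xi * P(Xi)
mean = sum(x * p for x, p in zip(outcomes, probs))                    # 1.0

# Var(X) = sum of (Xi - E(X))^2 * P(Xi)
variance = sum((x - mean) ** 2 * p for x, p in zip(outcomes, probs))  # 0.5
std_dev = variance ** 0.5                                             # about 0.707
```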

19 The Covariance The covariance measures the strength of the linear relationship between two variables. The covariance: σXY = Σ [Xi − E(X)][Yi − E(Y)] P(Xi, Yi), where X = discrete variable X, Xi = the ith outcome of X, Y = discrete variable Y, Yi = the ith outcome of Y, P(Xi, Yi) = probability of occurrence of the ith outcome of X together with the ith outcome of Y

20 Correlation Coefficient
The measure of dependence of variables X and Y is given by ρ = σXY / (σX σY); if ρ = 0 then X and Y are uncorrelated

21 Probability Distributions
Discrete probability distributions: Binomial, Poisson, Hypergeometric, Multinomial. Continuous probability distributions: Normal, Uniform, Exponential.

22 Binomial Distribution Formula
P(X = c) = [n! / (c! (n − c)!)] p^c (1 − p)^(n − c), where P(X = c) = probability of c successes in n trials; the random variable X denotes the number of 'successes' in n trials (X = 0, 1, 2, ..., n); n = sample size (number of trials or observations); p = probability of "success" in a single trial (does not change from one trial to the next). Example: flip a coin four times, let X = # heads: n = 4, p = 0.5, 1 − p = 0.5, X = 0, 1, 2, 3, 4
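A minimal sketch of the binomial formula using Python's standard library (`binom_pmf` is a name chosen here, not from the slides):

```python
from math import comb

def binom_pmf(c, n, p):
    """P(X = c): probability of c successes in n independent trials."""
    return comb(n, c) * p**c * (1 - p)**(n - c)

# The slide's example: flip a fair coin four times (n = 4, p = 0.5)
pmf = [binom_pmf(c, 4, 0.5) for c in range(5)]
# pmf = [0.0625, 0.25, 0.375, 0.25, 0.0625]
```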

23 Binomial Distribution
The shape of the binomial distribution depends on the values of p and n. (Figures: for n = 5 and p = 0.1 the distribution is skewed toward small X; for n = 5 and p = 0.5 it is symmetric about the mean.)

24 Binomial Distribution Characteristics
Mean: μ = np. Variance: σ² = np(1 − p). Standard deviation: σ = √(np(1 − p)), where n = sample size, p = probability of success, (1 − p) = probability of failure

25 Multinomial Distribution
P(X1 = c1, ..., Xk = ck) = [n! / (c1! ... ck!)] p1^c1 ... pk^ck = probability of observing counts c1, ..., ck in n trials, where the random variable Xi denotes the number of outcomes of type i in n trials; n = sample size (number of trials or observations); pi = probability of outcome type i. Example: drawing 5 red, 4 blue and 3 yellow balls in n = 12 trials with p = [5/12, 4/12, 3/12] ≈ [0.417, 0.333, 0.25]
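The multinomial probability for the slide's ball example can be sketched as follows (the function name is mine):

```python
from math import factorial

def multinom_pmf(counts, probs):
    """P(X1 = c1, ..., Xk = ck) for n = sum(counts) independent trials."""
    n = sum(counts)
    coef = factorial(n)
    for c in counts:
        coef //= factorial(c)          # multinomial coefficient n!/(c1!...ck!)
    prob = float(coef)
    for c, p in zip(counts, probs):
        prob *= p ** c
    return prob

# 5 red, 4 blue, 3 yellow in n = 12 draws with p = [5/12, 4/12, 3/12]
p = multinom_pmf([5, 4, 3], [5/12, 4/12, 3/12])   # about 0.067
```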

26 The Normal Distribution
'Bell shaped' and symmetrical; mean, median and mode are equal. Location is determined by the mean μ; spread is determined by the standard deviation σ. The random variable has an infinite theoretical range: −∞ to +∞. (Figure: f(X) against X, centered at μ = mean = median = mode.)

27 The Normal Probability Density Function
The formula for the normal probability density function is f(X) = [1 / (σ√(2π))] e^(−(X − μ)² / (2σ²)), where e = the mathematical constant approximated by 2.71828, π = the mathematical constant approximated by 3.14159, μ = the population mean, σ = the population standard deviation, X = any value of the continuous variable. Any normal distribution (with any mean and standard deviation combination) can be transformed into the standardized normal distribution Z by transforming X units into Z units: Z = (X − μ)/σ.

28 Comparing X and Z units
An X value of 200 from a distribution with μ = 100 and σ = 50 corresponds to Z = 2.0 in the standardized distribution (μ = 0, σ = 1). Note that the distribution is the same; only the scale has changed. We can express the problem in original units (X) or in standardized units (Z).

29 Finding Normal Probabilities
Suppose X is normal with mean 8.0 and standard deviation 5.0. Find P(X < 8.6). Standardizing: Z = (8.6 − 8.0)/5.0 = 0.12, so P(X < 8.6) = P(Z < 0.12). (Figure: the shaded area under the curve to the left of X = 8.6.)

30 The Standardized Normal Table
The row shows the value of Z to the first decimal point; the column gives the second decimal point. The value within the table gives the probability from Z = 0 up to the desired Z value; for example, the entry in row 2.0 is .4772, so P(0 < Z < 2.00) = .4772 and P(Z < 2.00) = .5000 + .4772 = .9772.

31 Relationship between Binomial & Normal distributions
If n is large and neither p nor q = 1 − p is too close to zero, the binomial distribution can be closely approximated by a normal distribution, with standardized normal variable given by Z = (X − np)/√(npq), where X is the random variable giving the number of successes in n Bernoulli trials and p is the probability of success. Z is asymptotically normal.

32 Normal Approximation to the Binomial Distribution
The binomial distribution is discrete, but the normal is continuous. To use the normal to approximate the binomial, accuracy is improved by using a continuity correction. Example: X is discrete in a binomial distribution, so P(X = 4) can be approximated with a continuous normal distribution by finding P(3.5 < X < 4.5).

33 Normal Approximation to the Binomial Distribution
(continued) The closer p is to 0.5, the better the normal approximation to the binomial The larger the sample size n, the better the normal approximation to the binomial General rule: The normal distribution can be used to approximate the binomial distribution if np ≥ 5 and n(1 – p) ≥ 5

34 Normal Approximation to the Binomial Distribution
(continued) The mean and standard deviation of the binomial distribution are μ = np and σ = √(np(1 − p)). Transform binomial to normal using the formula: Z = (X − np)/√(np(1 − p))

35 Using the Normal Approximation to the Binomial Distribution
If n = 1000 and p = 0.2, what is P(X ≤ 180)? Approximate P(X ≤ 180) using a continuity correction adjustment: P(X ≤ 180.5). Transform to standardized normal: Z = (180.5 − 200)/√(1000 × 0.2 × 0.8) = −19.5/12.65 ≈ −1.54. So P(X ≤ 180.5) ≈ P(Z ≤ −1.54) = 0.0618.
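Python's `statistics.NormalDist` can reproduce this calculation (the 0.0618 above is the table value for Z = −1.54; the unrounded Z is about −1.5416):

```python
from statistics import NormalDist

n, p = 1000, 0.2
mu = n * p                           # binomial mean: 200
sigma = (n * p * (1 - p)) ** 0.5     # binomial std dev: about 12.65

# Continuity correction: P(X <= 180) is approximated by P(X <= 180.5)
z = (180.5 - mu) / sigma             # about -1.54
prob = NormalDist().cdf(z)           # about 0.062
```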

36 Poisson Distribution
P(X = x) = (e^(−λ) λ^x) / x!, where X = discrete random variable (number of events in an area of opportunity), λ = expected number of events (constant), e = base of the natural logarithm system (2.71828...)

37 Poisson Distribution Characteristics
Mean: μ = λ. Variance: σ² = λ. Standard deviation: σ = √λ, where λ = expected number of events

38 Poisson Distribution Shape
The shape of the Poisson distribution depends on the parameter λ. (Figures: for λ = 0.50 the distribution is concentrated near zero; for λ = 3.00 it is more spread out and more symmetric.)

39 Relationship between Binomial & Poisson distributions
In a binomial distribution, if n is large and p (the probability of success) is small, then it is approximated by the Poisson distribution with λ = np.

40 Relationship b/w Poisson & Normal distributions
The Poisson distribution approaches the normal distribution as λ → ∞, with standardized normal variable given by Z = (X − λ)/√λ

41 Are there any other distributions besides binomial and Poisson that have the normal distribution as the limiting case?

42 The Uniform Distribution
The uniform distribution is a probability distribution that has equal probabilities for all possible outcomes of the random variable Also called a rectangular distribution

43 Uniform Distribution Example
Example: uniform probability distribution over the range 2 ≤ X ≤ 6: f(X) = 1/(b − a) = 1/(6 − 2) = 0.25 for 2 ≤ X ≤ 6, and 0 otherwise. (Figure: f(X) is a flat line at height 0.25 between X = 2 and X = 6.)

44 Sampling Distributions
Sampling Distribution of the Mean Sampling Distribution of the Proportion

45 Sampling Distributions
A sampling distribution is a distribution of all of the possible values of a statistic for a given size sample selected from a population

46 Developing a Sampling Distribution
Assume there is a population of four individuals (A, B, C, D), so the population size is N = 4. The random variable X is the age of individuals; values of X: 18, 20, 22, 24 (years)

47 Developing a Sampling Distribution
(continued) Summary measures for the population distribution: μ = ΣXi/N = (18 + 20 + 22 + 24)/4 = 21 and σ = √[Σ(Xi − μ)²/N] = 2.236. (Figure: P(x) = .25 for each of the four ages – a uniform distribution.)

48 Developing a Sampling Distribution (continued)
Now consider all possible samples of size n = 2, sampling with replacement. There are 16 possible samples and 16 sample means:

1st Obs \ 2nd Obs:   18       20       22       24
18                   18,18    18,20    18,22    18,24
20                   20,18    20,20    20,22    20,24
22                   22,18    22,20    22,22    22,24
24                   24,18    24,20    24,22    24,24

49 Developing a Sampling Distribution (continued)
Summary measures of this sampling distribution: the mean of the 16 sample means is 21 (equal to the population mean μ), and their standard deviation – the standard error of the mean – is 1.58.
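The whole construction can be enumerated directly; this sketch reproduces the 16 samples and the summary measures:

```python
from itertools import product
from statistics import mean, pstdev

population = [18, 20, 22, 24]                 # ages of the four individuals

# All 16 possible samples of size n = 2, sampling with replacement
samples = list(product(population, repeat=2))
sample_means = [sum(s) / 2 for s in samples]

mu_xbar = mean(sample_means)   # 21.0, equal to the population mean
se = pstdev(sample_means)      # about 1.58, the standard error of the mean
```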

50 Comparing the Population with its Sampling Distribution
(Figure: the population distribution of X is uniform over the four ages, while the distribution of the 16 sample means of size n = 2 is peaked around the common mean of 21.)

51 Standard Error, Mean and Variance
Different samples of the same size from the same population will yield different sample means. A measure of the variability in the mean from sample to sample is given by the standard error of the mean: σX̄ = σ/√n. (This assumes that sampling is with replacement, or without replacement from an infinite population.) Note that the standard error of the mean decreases as the sample size increases.

52 Standard Error, Mean and Variance
If a population is normal with mean μ and standard deviation σ, the sampling distribution of the sample mean X̄ is also normally distributed, with mean μ and standard deviation σ/√n, and the Z value Z = (X̄ − μ)/(σ/√n) follows the unit (standard) normal distribution.

53 Sampling Distribution Properties
For a normal population distribution, the sampling distribution of the mean is also normal, with the same mean: E(X̄) = μ (i.e., X̄ is unbiased).

54 Sampling Distribution Properties
(continued) As n increases, the standard error σ/√n decreases: a larger sample size gives a narrower sampling distribution than a smaller sample size.

55 If the Population is not Normal
We can apply the Central Limit Theorem: even if the population is not normal, sample means from the population will be approximately normal as long as the sample size is large enough. Properties of the sampling distribution: its mean is μ and its standard deviation is σ/√n.

56 Central Limit Theorem As the sample size n gets large enough, the sampling distribution of the mean becomes almost normal regardless of the shape of the population.

57 If the Population is not Normal
(continued) Sampling distribution properties: central tendency – the mean of the sampling distribution equals μ; variation – its standard deviation is σ/√n. Whatever the shape of the population distribution, the sampling distribution becomes normal as n increases, and a larger sample size gives a narrower distribution than a smaller one.

58 How Large is Large Enough?
For most distributions, n > 30 will give a sampling distribution that is nearly normal For fairly symmetric distributions, n > 15 For normal population distributions, the sampling distribution of the mean is always normally distributed

59 Example Suppose a population has mean μ = 8 and standard deviation σ = 3. Suppose a random sample of size n = 36 is selected. What is the probability that the sample mean is between 7.8 and 8.2?

60 Example (continued) Solution: Even if the population is not normally distributed, the central limit theorem can be used (n > 30), so the sampling distribution of X̄ is approximately normal, with mean 8 and standard deviation σ/√n = 3/√36 = 0.5.
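The probability itself can be computed with the standard library's `statistics.NormalDist`, applied to the sampling distribution of the mean:

```python
from statistics import NormalDist

mu, sigma, n = 8, 3, 36
se = sigma / n ** 0.5                 # standard error = 0.5
sampling_dist = NormalDist(mu, se)    # distribution of the sample mean

# P(7.8 < Xbar < 8.2)
prob = sampling_dist.cdf(8.2) - sampling_dist.cdf(7.8)   # about 0.3108
```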

61 Example Solution (continued):
P(7.8 < X̄ < 8.2) = P((7.8 − 8)/0.5 < Z < (8.2 − 8)/0.5) = P(−0.4 < Z < 0.4) = 0.3108. (Figure: the sample from the population distribution is standardized through the sampling distribution to the standard normal distribution.)

62 Population Proportions
π = the proportion of the population having some characteristic. The sample proportion p = X/n provides an estimate of π; 0 ≤ p ≤ 1. The success count X has a binomial distribution (assuming sampling with replacement from a finite population, or sampling without replacement from an infinite population).

63 Sampling Distribution of Proportions
For large values of n (n ≥ 30), the sampling distribution of p is very nearly a normal distribution, with mean π and standard error σp = √(π(1 − π)/n), where π = population proportion. (Figure: the sampling distribution of p, centered at π.)

64 Example If the true proportion of voters who support Proposition A is π = 0.4, what is the probability that a sample of size 200 yields a sample proportion between 0.40 and 0.45? i.e.: if π = 0.4 and n = 200, what is P(0.40 ≤ p ≤ 0.45) ?

65 Example (continued)
if π = 0.4 and n = 200, what is P(0.40 ≤ p ≤ 0.45)? Find σp = √(π(1 − π)/n) = √(0.4 × 0.6/200) = 0.03464. Convert to standard normal: the interval 0.40 ≤ p ≤ 0.45 corresponds to (0.40 − 0.40)/0.03464 ≤ Z ≤ (0.45 − 0.40)/0.03464, i.e. 0 ≤ Z ≤ 1.44.

66 Example (continued)
if π = 0.4 and n = 200, what is P(0.40 ≤ p ≤ 0.45)? Use the standard normal table: P(0 ≤ Z ≤ 1.44) = (Figure: the sampling distribution between p = 0.40 and p = 0.45 standardizes to the standard normal between Z = 0 and Z = 1.44, with area 0.4251.)
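The same answer (up to table rounding) comes out of a short numerical check:

```python
from statistics import NormalDist

pi, n = 0.4, 200
se = (pi * (1 - pi) / n) ** 0.5          # about 0.03464

# Normal approximation to the sampling distribution of the proportion p
dist = NormalDist(pi, se)
prob = dist.cdf(0.45) - dist.cdf(0.40)   # about 0.4255 (table gives 0.4251)
```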

67 Point and Interval Estimates
A point estimate is a single number; a confidence interval provides additional information about variability. The interval runs from the lower confidence limit to the upper confidence limit around the point estimate; its length is the width of the confidence interval.

68 Point Estimates
We can estimate a population parameter (mean μ, proportion π) with a sample statistic – a point estimate (X̄, p). How much uncertainty is associated with a point estimate of a population parameter? An interval estimate provides more information about a population characteristic than does a point estimate. Such interval estimates are called confidence intervals.

69 Confidence Interval Estimate
An interval gives a range of values: Takes into consideration variation in sample statistics from sample to sample Based on observations from 1 sample Gives information about closeness to unknown population parameters Stated in terms of level of confidence Can never be 100% confident

70 Estimation Process
From the population (mean μ unknown), take a random sample with mean X̄ = 50; conclude, for example, "I am 95% confident that μ is between 40 & 60."

71 Point Estimate ± (Critical Value)(Standard Error)
General Formula The general formula for all confidence intervals is: Point Estimate ± (Critical Value)(Standard Error)

72 Confidence Interval for μ (σ Known)
Assumptions: population standard deviation σ is known; population is normally distributed; if the population is not normal, use a large sample. Confidence interval estimate: X̄ ± Z σ/√n, where X̄ is the point estimate, Z is the normal distribution critical value for the chosen level of confidence, and σ/√n is the standard error.

73 Finding the Critical Value, Z
Consider a 95% confidence interval: 95% of the standard normal distribution lies between Z = −1.96 and Z = +1.96; these Z values mark the lower and upper confidence limits, which convert to X units around the point estimate.

74 Intervals and Level of Confidence
Intervals extend from X̄ − Z σ/√n to X̄ + Z σ/√n. Across repeated samples, (1 − α)×100% of intervals constructed this way contain μ; (α)×100% do not.

75 Example A sample of 11 circuits from a large normal population has a mean resistance of 2.20 ohms. We know from past testing that the population standard deviation is 0.35 ohms. Determine a 95% confidence interval for the true mean resistance of the population.

76 Example (continued) A sample of 11 circuits from a large normal population has a mean resistance of 2.20 ohms. We know from past testing that the population standard deviation is 0.35 ohms. Solution: X̄ ± Z σ/√n = 2.20 ± 1.96 (0.35/√11) = 2.20 ± 0.2068, i.e. 1.9932 ≤ μ ≤ 2.4068.
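A sketch of the interval computation, with `NormalDist.inv_cdf` supplying the 1.96 critical value:

```python
from statistics import NormalDist

xbar, sigma, n = 2.20, 0.35, 11
z = NormalDist().inv_cdf(0.975)        # two-sided 95% critical value, about 1.96
margin = z * sigma / n ** 0.5          # about 0.2068
lo, hi = xbar - margin, xbar + margin  # about (1.9932, 2.4068)
```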

77 Interpretation We are 95% confident that the true mean resistance is between 1.9932 and 2.4068 ohms. Although the true mean may or may not be in this interval, 95% of intervals formed in this manner will contain the true mean.

78 Confidence Interval for μ (σ Unknown)
If the population standard deviation σ is unknown, we can substitute the sample standard deviation, S This introduces extra uncertainty, since S is variable from sample to sample So we use the t distribution instead of the normal distribution

79 Confidence Interval for μ (σ Unknown)
(continued) Assumptions: population standard deviation is unknown; population is normally distributed; if the population is not normal, use a large sample. Use Student's t distribution. Confidence interval estimate: X̄ ± t S/√n (where t is the critical value of the t distribution with n − 1 degrees of freedom and an area of α/2 in each tail).

80 Student’s t Distribution
The t is a family of distributions; the t value depends on the degrees of freedom (d.f.), the number of observations that are free to vary after the sample mean has been calculated: d.f. = n − 1.

81 Degrees of Freedom
Idea: the number of observations that are free to vary after the sample mean has been calculated. Example: suppose the mean of 3 numbers is 8.0. Let X1 = 7 and X2 = 8; what is X3? If the mean of these three values is 8.0, then X3 must be 9 (i.e., X3 is not free to vary). Here n = 3, so degrees of freedom = n − 1 = 3 − 1 = 2: two values can be any numbers, but the third is not free to vary for a given mean.

82 Student’s t Distribution
Note: t → Z as n increases. t-distributions are bell-shaped and symmetric, but have 'fatter' tails than the normal. (Figure: standard normal, i.e. t with df = ∞, compared with t for df = 13 and df = 5.)

83 Student’s t Table Let n = 3, so df = n − 1 = 2; for 90% confidence, the upper-tail area is 0.05 and the critical value is t = 2.920.

df    .25      .10      .05
1     1.000    3.078    6.314
2     0.817    1.886    2.920
3     0.765    1.638    2.353

The body of the table contains t values, not probabilities.

84 Example A random sample of n = 25 has X̄ = 50 and S = 8. Form a 95% confidence interval for μ.
d.f. = n − 1 = 24, so t24, 0.025 = 2.0639. The confidence interval is X̄ ± t S/√n = 50 ± 2.0639 (8/√25), i.e. 46.698 ≤ μ ≤ 53.302.
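The interval can be reproduced numerically; the t critical value 2.0639 (df = 24, 95%) is taken from a t table, since the standard library has no t quantile function:

```python
xbar, s, n = 50, 8, 25
t_crit = 2.0639                        # t critical value, df = 24, alpha/2 = 0.025

margin = t_crit * s / n ** 0.5         # about 3.302
lo, hi = xbar - margin, xbar + margin  # about (46.698, 53.302)
```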

85 What is a Hypothesis?
A hypothesis is a claim (assumption) about a population parameter, such as a population mean or a population proportion. Example: the mean monthly cell phone bill of this city is μ = $42. Example: the proportion of adults in this city with cell phones is π = 0.68.

86 The Null Hypothesis, H0
States the claim or assertion to be tested. Example: the average number of TV sets in U.S. homes is equal to three (H0: μ = 3). It is always about a population parameter, not about a sample statistic.

87 The Null Hypothesis, H0 (continued) Begin with the assumption that the null hypothesis is true. Always contains an "=", "≤" or "≥" sign. May or may not be rejected.

88 The Alternative Hypothesis, H1
Is the opposite of the null hypothesis, e.g., the average number of TV sets in U.S. homes is not equal to 3 (H1: μ ≠ 3). Never contains the "=", "≤" or "≥" sign. May or may not be proven. It is generally the hypothesis that the researcher is trying to prove.

89 Hypothesis Testing Process
Claim: the population mean age is 50 (null hypothesis H0: μ = 50). Now select a random sample; suppose the sample mean age is X̄ = 20. Is X̄ = 20 likely if μ = 50? If not likely, REJECT the null hypothesis.

90 Level of Significance and the Rejection Region
α represents the total probability in the shaded rejection region beyond the critical value(s). Two-tail test: H0: μ = 3, H1: μ ≠ 3, with α/2 in each tail. Upper-tail test: H0: μ ≤ 3, H1: μ > 3, with α in the upper tail. Lower-tail test: H0: μ ≥ 3, H1: μ < 3, with α in the lower tail.

91 Hypothesis Testing If we know that some data come from a certain distribution but the parameter is unknown, we might try to predict what the parameter is. Hypothesis testing is about working out how likely our predictions are: we test how likely it is that the value we were given could have come from the distribution with this predicted parameter, and then decide whether or not to reject the null hypothesis in favor of the alternative. A one-tailed test looks for an increase or a decrease in the parameter, whereas a two-tailed test looks for any change in the parameter (increase or decrease). We can perform the test at any significance level (usually 1%, 5% or 10%). For example, performing the test at the 5% level means that there is a 5% chance of wrongly rejecting H0 when it is true. If we perform the test at the 5% level and decide to reject the null hypothesis, we say "there is significant evidence at the 5% level to suggest the hypothesis is false". For example, suppose we are told that the value 3 has come from a Poisson distribution, and we want to test the null hypothesis that the parameter (which is the mean) of the Poisson distribution is 9. We work out how likely it is that the value 3 could have come from a Poisson distribution with parameter 9; if that is not very likely, we reject the null hypothesis in favor of the alternative.

92 Hypothesis Testing Example
Test the claim that the true mean # of TV sets in US homes is equal to 3. (Assume σ = 0.8.) 1. State the appropriate null and alternative hypotheses: H0: μ = 3, H1: μ ≠ 3 (this is a two-tail test). 2. Specify the desired level of significance and the sample size: suppose that α = 0.05 and n = 100 are chosen for this test.

93 Hypothesis Testing Example
(continued) 3. Determine the appropriate technique: σ is known, so this is a Z test. 4. Determine the critical values: for α = 0.05 the critical Z values are ±1.96. 5. Collect the data and compute the test statistic: suppose the sample results are n = 100, X̄ = 2.84 (σ = 0.8 is assumed known). So the test statistic is Z = (X̄ − μ)/(σ/√n) = (2.84 − 3)/(0.8/√100) = −0.16/0.08 = −2.0.
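Steps 3-5 can be condensed into a short Z-test sketch (the sample mean 2.84 is the one used in the p-value example later in the deck):

```python
mu0, sigma, n = 3, 0.8, 100    # hypothesized mean, known std dev, sample size
xbar = 2.84                    # observed sample mean

# Z test statistic
z = (xbar - mu0) / (sigma / n ** 0.5)   # -2.0

# Two-tail decision at alpha = 0.05 (critical values +/- 1.96)
reject_h0 = abs(z) > 1.96               # True: Z falls in the rejection region
```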

94 Hypothesis Testing Example
(continued) 6. Is the test statistic in the rejection region? With α = 0.05 split as 0.025 in each tail, reject H0 if Z < −1.96 or Z > 1.96; otherwise do not reject H0. Here Z = −2.0 < −1.96, so the test statistic is in the rejection region.

95 Hypothesis Testing Example
(continued) 6 (continued). Reach a decision and interpret the result: since Z = −2.0 < −1.96, we reject the null hypothesis and conclude that there is sufficient evidence that the mean number of TVs in US homes is not equal to 3.

96 One-Tail Tests In many cases, the alternative hypothesis focuses on a particular direction. H0: μ ≥ 3, H1: μ < 3 is a lower-tail test, since the alternative hypothesis is focused on the lower tail below the mean of 3. H0: μ ≤ 3, H1: μ > 3 is an upper-tail test, since the alternative hypothesis is focused on the upper tail above the mean of 3.

97 Example: Upper-Tail Z Test for Mean ( Known)
A phone industry manager thinks that customer monthly cell phone bills have increased and now average over $52 per month. The company wishes to test this claim. (Assume σ = 10 is known.) Form the hypothesis test: H0: μ ≤ 52 (the average is not over $52 per month); H1: μ > 52 (the average is greater than $52 per month, i.e., sufficient evidence exists to support the manager’s claim).

98 Find the rejection region:
Suppose that α = 0.10 is chosen for this test. The rejection region is the upper tail with area α = 0.10: reject H0 if Z > 1.28.

99 Review: One-Tail Critical Value
What is Z given α = 0.10? We need the Z value with cumulative probability 0.90. From the standardized normal distribution table (portion):

Z      .07      .08      .09
1.1    .8790    .8810    .8830
1.2    .8980    .8997    .9015
1.3    .9147    .9162    .9177

The entry closest to 0.90 is .8997 at Z = 1.28, so the critical value is 1.28.

100 t Test of Hypothesis for the Mean (σ Unknown)
Convert the sample statistic X̄ to a t test statistic. Hypothesis tests for μ: if σ is known, use a Z test; if σ is unknown, use a t test. The test statistic is t = (X̄ − μ)/(S/√n) with n − 1 degrees of freedom.

101 Example: Two-Tail Test ( Unknown)
The average cost of a hotel room in New York is said to be $168 per night. A random sample of 25 hotels gave a sample mean X̄ and sample standard deviation S. Test at the α = 0.05 level. (Assume the population distribution is normal.) H0: μ = 168; H1: μ ≠ 168.

102 Example Solution: Two-Tail Test
H0: μ = 168; H1: μ ≠ 168; α = 0.05, with α/2 = .025 in each tail; n = 25. σ is unknown, so use a t statistic. Critical values: t24, 0.025 = ±2.0639. The computed test statistic is t = 1.46, which falls between −2.0639 and +2.0639, so do not reject H0: there is not sufficient evidence that the true mean cost is different from $168.

103 Errors in Making Decisions
Type I error: rejecting a true null hypothesis. Considered a serious type of error. The probability of a Type I error is α, called the level of significance of the test, set by the researcher in advance.

104 Errors in Making Decisions
(continued) Type II error: failing to reject a false null hypothesis. The probability of a Type II error is β.

105 Type II Error
In a hypothesis test, a Type II error occurs when the null hypothesis H0 is not rejected when it is in fact false. Suppose we do not reject H0: μ ≥ 52 when in fact the true mean is μ = 50. Here, β = P(X̄ ≥ cutoff) computed under μ = 50, where the cutoff is the boundary of the non-rejection region.

106 Calculating β
Suppose n = 64, σ = 6, and α = .05 (for the lower-tail test of H0: μ ≥ 52). The standard error is σ/√n = 6/8 = 0.75, so the cutoff is 52 − 1.645 × 0.75 = 50.766, and β = P(X̄ ≥ 50.766) if μ = 50.

107 Calculating β and Power of the test
(continued) Suppose n = 64, σ = 6, and α = 0.05. Then β = P(X̄ ≥ 50.766 | μ = 50) = P(Z ≥ (50.766 − 50)/0.75) = P(Z ≥ 1.02) = 0.1539, and Power = 1 − β = 0.8461: the probability of correctly rejecting a false null hypothesis.
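The β and power computation can be checked end to end with `statistics.NormalDist` (exact values differ slightly from the table-based 0.1539 and 0.8461):

```python
from statistics import NormalDist

mu0, true_mu, sigma, n, alpha = 52, 50, 6, 64, 0.05
se = sigma / n ** 0.5                                 # standard error = 0.75

# Lower-tail test of H0: mu >= 52 -> reject when Xbar < cutoff
cutoff = mu0 - NormalDist().inv_cdf(1 - alpha) * se   # about 50.766

# Type II error: fail to reject (Xbar >= cutoff) when mu = 50
beta = 1 - NormalDist(true_mu, se).cdf(cutoff)        # about 0.154
power = 1 - beta                                      # about 0.846
```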

108 p-value The probability value (p-value) of a statistical hypothesis test is the probability, computed assuming the null hypothesis is true, of obtaining a test statistic at least as extreme as the one observed. Equivalently, it is the significance level at which we would only just reject the null hypothesis. The p-value is compared with the chosen significance level of our test and, if it is smaller, the result is significant: if the null hypothesis is rejected at the 5% significance level, this is reported as "p < 0.05". Small p-values suggest that the null hypothesis is unlikely to be true; the smaller the p-value, the more convincing the rejection of the null hypothesis.

109 p-Value Example Example: How likely is it to see a sample mean of 2.84 (or something further from the mean, in either direction) if the true mean is μ = 3.0? With n = 100 and σ = 0.8, X̄ = 2.84 is translated to a Z score of Z = −2.0. The area beyond ±2.0 is 0.0228 in each tail, so p-value = 0.0228 + 0.0228 = 0.0456.

110 p-Value Example Compare the p-value with  Here: p-value = 0.0456
(continued) Compare the p-value with α: if p-value < α, reject H0; if p-value ≥ α, do not reject H0. Here p-value = 0.0456 and α = 0.05; since 0.0456 < 0.05, we reject the null hypothesis.

