Objectives (7.1, 7.2) Inference for comparing means of two populations
- Matched pairs t confidence interval
- Matched pairs t hypothesis test
- Two-sample t significance test
- Two-sample t confidence interval
- Robustness and general assumptions
- Non-normal population distributions and small samples

Adapted from the authors' slides © 2012 W.H. Freeman and Company

Matched pairs inference procedures
Sometimes we want to compare treatments or conditions at the individual level. These situations produce two samples that are not independent – they are related to each other. The subjects of one sample are identical to, or matched (paired) with, the subjects of the other sample.
- Example: Pre-test and post-test studies look at data collected on the same subjects before and after some "treatment" is performed.
- Example: Twin studies often try to sort out the influence of genetic factors by comparing a variable between sets of twins.
- Example: Using people matched for age, sex, and education in social studies helps to cancel out the effects of these potentially relevant variables.
Except for pre/post studies, treatments should be assigned at random within each pair when the study is an experiment.

For data from a matched pairs design, we use the observed differences X_difference = X_1 − X_2 to test the difference in the two population means. The hypotheses can then be expressed as
H0: μ_difference = 0; Ha: μ_difference > 0 (or < 0, or ≠ 0).
Conceptually, this is no different from our earlier tests for the mean of one population. There is just one mean, μ_difference, to test.

Sweetening colas (revisited)
The sweetness loss due to storage was evaluated by 10 professional tasters (comparing the sweetness before and after storage). Although we did not mention it explicitly before, this is a pre-/post-test design and the variable is the difference: sweetness after − sweetness before.

Taster   Change in sweetness
1        −2.0
2        −0.4
3        −0.7
4        −2.0
5         0.4
6        −2.2
7         1.3
8        −1.2
9        −1.1
10       −2.3

We wanted to test whether storage results in a loss of sweetness, i.e. a negative change, thus: H0: μ_change = 0 versus Ha: μ_change < 0. A matched pairs test of significance is therefore just like a one-sample test.

Does lack of caffeine increase depression?
Individuals diagnosed as caffeine-dependent were deprived of caffeine-rich foods and assigned pills for 10 days. Sometimes the pills contained caffeine and other times they contained a placebo. A depression score was determined separately for the caffeine pills (as a whole) and for the placebo pills.
- There are 2 data points for each subject, but we only look at the difference (placebo − caffeine).
- We calculate that x̄_diff = 7.36 and s_diff = 6.92, with df = 10 (so n = 11).
- We test H0: μ_difference = 0 vs. Ha: μ_difference > 0. (Why is a one-sided test OK?)
- The t statistic is t = 7.36 / (6.92/√11) ≈ 3.53.
- From the t-distribution: P-value = 0.0027, which is quite small, in fact smaller than any commonly used α. Depression is greater with the placebo than with the caffeine pills, on average.
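A quick check of this calculation in Python from the summary statistics (the raw scores are not given on the slide):

```python
import numpy as np
from scipy import stats

x_bar, s, n = 7.36, 6.92, 11     # summary statistics from the slide

se = s / np.sqrt(n)              # standard error of the mean difference
t = (x_bar - 0) / se             # one-sample t statistic on the differences
p = stats.t.sf(t, df=n - 1)      # one-sided P-value for Ha: mu_diff > 0
print(round(t, 2), round(p, 4))  # 3.53, 0.0027
```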

The weight of calves
- It is clear that a calf's weight increases over time. However, it may be a surprise to learn that the increase is not immediate. This can be seen by analyzing the calf weight data that we have been studying over the past few weeks.
- Looking at the calf data in StatCrunch, to see how much weight each calf gains or loses in Week 1 we must take the difference (weight at week 0 − weight at week 1), and the analysis needs to be done on these differences (they can be stored in your StatCrunch spreadsheet).
- We then conduct an analysis identical to the one-sample methods, but on the differences; we obtain the table below (the numbers were obtained in StatCrunch). Observe that to obtain the t-transform we need the mean difference and its standard error (a measure of spread of the mean difference): t-transform = (mean difference − 0)/s.e.

Week                    1        2        3        4
Average difference
(week 0 − week n)     3.78     5.53    −0.46    −8.33
n                       48       48       48       48
Standard error        0.58     0.49     0.69     0.97
t-transform           ≈6.5    ≈11.3    ≈−0.7    ≈−8.6

Ha: μ_difference < 0 – Week 1: no evidence (average > 0). Week 2: no evidence (average > 0). Week 3: no evidence. Week 4: yes, P-value < 0.001.
Ha: μ_difference ≠ 0 – Week 1: yes, P-value < 0.001. Week 2: yes, P-value < 0.001. Week 3: no evidence, P-value = 0.265. Week 4: yes, P-value < 0.001.
Ha: μ_difference > 0 – Week 1: yes, P-value < 0.001. Week 2: yes, P-value < 0.001. Week 3: no evidence (average < 0). Week 4: no evidence (average < 0).

We see that from Week 1 to Week 2 there is a further drop in weight; in fact, the t-transform in Week 2 is much greater than the t-transform in Week 1, so the P-value in Week 2 is a lot smaller than the P-value in Week 1. In Week 3 we do not reject any hypothesis, which suggests that the weight is back to birth weight. And from Week 4 onwards we see a gain in weight.

CI for the mean weight difference
- Below we construct 95% CIs for the mean weight difference in each week, using the t distribution with 47 df (upper 2.5% point t* = 2.01). The mean difference is likely to be in this interval.

Week 1: 3.78 ± 2.01 × 0.58 = [2.61, 4.95]
Week 2: 5.53 ± 2.01 × 0.49 = [4.55, 6.51]
Week 3: −0.46 ± 2.01 × 0.69 = [−1.85, 0.93]
Week 4: −8.33 ± 2.01 × 0.97 = [−10.28, −6.38]

Since zero does not lie in the intervals for Week 1, Week 2 and Week 4, we reject the null in the two-sided test at the 5% level; in each of those weeks the P-value is less than 0.05. Using the information above we can also construct 95% intervals for the weight of a randomly selected healthy calf. These will be much wider than the intervals above and will not decrease with sample size. Such intervals can help us determine whether a calf is healthy or not. How to construct them is beyond this course.
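These intervals can be reproduced in Python from the means and standard errors quoted above:

```python
from scipy import stats

# Mean difference (week 0 - week n) and its standard error, from the slide
weeks = {1: (3.78, 0.58), 2: (5.53, 0.49), 3: (-0.46, 0.69), 4: (-8.33, 0.97)}

t_star = stats.t.ppf(0.975, df=47)   # upper 2.5% point, approx. 2.01
for week, (mean_diff, se) in weeks.items():
    lo, hi = mean_diff - t_star * se, mean_diff + t_star * se
    print(f"Week {week}: [{lo:.2f}, {hi:.2f}]")
```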

Independent samples inference
- The purpose of most studies is to compare the effects of different treatments or conditions. In fact, the most effective studies are the ones that make direct comparisons from data within the studies.
- Often the subjects are observed separately under the different conditions, resulting in samples that are independent. That is, the subjects of each sample are obtained and observed separately from, and without any regard to, the subjects of the other samples.
- Example: Students are randomly assigned to one of two classes. Those in the first class are given iPads that they bring to class to be interactively involved. The other class has a more traditional format.
- Example: An economist obtains labor data from subjects in France and from subjects in the U.S. The samples are independently obtained.
- As in the matched pairs design, in an experiment the subjects should be randomized – assigned to the samples at random; in an observational study the samples should be random samples from their populations.

Independent sample scenarios
Scenario 1: Sample 1 is randomly obtained from Population 1 and, by an independent (separate/unrelated) means, Sample 2 is randomly obtained from Population 2.
Scenario 2: Sample 1 is randomly obtained and its subjects are given Treatment 1, and independently Sample 2 is randomly obtained from the same population and its subjects are given Treatment 2.
In both cases, subjects in the samples are obtained and observed separately, and without any relationship to subjects in the other sample(s). Independence is not the same as "different".

Difference of two sample means
- We are interested in two populations/conditions with respective parameters (μ1, σ1) and (μ2, σ2).
- We obtain two independent simple random samples with respective statistics (x̄1, s1) and (x̄2, s2).
- We use x̄1 and x̄2 to estimate the unknown μ1 and μ2.
- We use s1 and s2 to estimate the unknown σ1 and σ2.
- Since we wish to compare the means, we use x̄1 − x̄2 to estimate the difference in means μ1 − μ2.
Example: After the original coffee sales study, the marketing firm obtained independent random samples: 34 "West Coast" shops and 29 "East Coast" shops. The firm observed the average number of customers (per day) over two-week periods. The two sample means were x̄_WC = 295 and x̄_EC = 319. Thus, they estimate that East Coast shops have an average of 319 − 295 = 24 customers more per day than West Coast shops do.

Distribution of the difference of means
- In order to do statistical inference, we must know a few things about the sampling distribution of our statistic.
- The sampling distribution of x̄1 − x̄2 has standard deviation
  √(σ1²/n1 + σ2²/n2).
  (Mathematically, the variance of the difference is the sum of the variances of the two sample means.)
- This is estimated by the standard error (SE)
  SE = √(s1²/n1 + s2²/n2).
- For sufficiently large samples, the distribution is approximately normal.
- Then the two-sample t statistic is
  t = [(x̄1 − x̄2) − (μ1 − μ2)] / SE.
- This statistic has an approximate t-distribution on which we will base our inferences. But the degrees of freedom is complicated …

Two sample degrees of freedom
Statisticians have a formula for estimating the proper degrees of freedom, called the unpooled df. Most statistical software will compute it and you don't need to learn it. The unpooled df is always at least as large as the smaller of (n1 − 1, n2 − 1), so that smaller value can be used instead. It is called the conservative degrees of freedom. It is useful for doing HW problems, but for practical problems you should use statistical software, which will use the more accurate unpooled df.
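For reference, the unpooled df is the Welch–Satterthwaite approximation. A minimal sketch of what the software computes (the function name and the example numbers below are ours, not the course's):

```python
def welch_df(s1, n1, s2, n2):
    # Welch-Satterthwaite approximate (unpooled) degrees of freedom
    v1, v2 = s1 ** 2 / n1, s2 ** 2 / n2
    return (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))

print(welch_df(10, 25, 20, 40))   # approx. 60.7; conservative df = min(25, 40) - 1 = 24
```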

The strange standard error
The standard error for the two-sample test looks quite complicated, but it is quite logical. Recall that in the one-sample test the standard error decreased as the sample size increased (the sample standard deviation stays about the same while n grows, so the standard error decreases). In the two-sample case there are two sample sizes, and both must increase for the standard error to decrease. Consider the following examples:
- If the size of one sample stays the same while the other increases, the standard error does not decrease by much. This is because the estimator of one of the means does not improve – consider the extreme case where there is only one person in a group.
- If the standard deviations of both populations are about the same, and the total number of subjects in the study is fixed, then using an equal number of subjects in each group leads to the smallest standard error.
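A quick numerical illustration of both points (the standard deviations and sample sizes are made-up values):

```python
import numpy as np

def se_diff(s1, n1, s2, n2):
    # Standard error of (xbar1 - xbar2)
    return np.sqrt(s1 ** 2 / n1 + s2 ** 2 / n2)

# 40 subjects in total, both population SDs around 10 (hypothetical):
print(se_diff(10, 20, 10, 20))   # balanced groups:        approx. 3.16
print(se_diff(10, 35, 10, 5))    # unbalanced groups:      approx. 4.78
print(se_diff(10, 39, 10, 1))    # one subject in a group: approx. 10.13
```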

Two-sample t significance test
The null hypothesis is that the two population means μ1 and μ2 are equal, thus their difference is equal to zero.
H0: μ1 = μ2, equivalently H0: μ1 − μ2 = 0.
Either a one-sided or a two-sided alternative hypothesis can be tested. Using the value (μ1 − μ2) = 0 given in H0, the test statistic becomes
t = (x̄1 − x̄2) / √(s1²/n1 + s2²/n2).
To find the P-value, we look up the appropriate probability of the t-distribution using either the unpooled df or, more conservatively, df = smaller of (n1 − 1, n2 − 1).

Does smoking damage the lungs of children exposed to parental smoking? Forced vital capacity (FVC) is the volume (in milliliters) of air that an individual can exhale in 6 seconds. FVC was obtained for a sample of children not exposed to parental smoking and a group of children exposed to parental smoking. We want to know whether parental smoking decreases children's lung capacity as measured by the FVC test. Is the mean FVC lower in the population of children exposed to parental smoking than it is in children not exposed?

Parental smoking   FVC avg.    s      n
Yes                  75.5     9.3    30
No                   88.2    15.1    30

Parental smoking   FVC avg.    s      n
Yes                  75.5     9.3    30
No                   88.2    15.1    30

H0: μ_smoke = μ_no, i.e. H0: (μ_smoke − μ_no) = 0
Ha: μ_smoke < μ_no, i.e. Ha: (μ_smoke − μ_no) < 0 (one-sided)

The observed "effect" is a substantial reduction in FVC. But is it "significant"? To answer this, we calculate the t-statistic:
t = (75.5 − 88.2) / √(9.3²/30 + 15.1²/30) = −12.7/3.24 = −3.922.
Even if we use df = smaller of (n_smoke − 1, n_no − 1) = 29, we find that a t-statistic beyond 3.659 in magnitude gives a P-value < 0.0005 (for a one-sided test). So our t = −3.922 is very significant, and so we reject H0. Lung capacity is significantly impaired in children of smoking parents.
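The same test can be run in Python from the summary statistics; a sketch using SciPy's unpooled (Welch) test:

```python
from scipy import stats

# Welch (unpooled) two-sample t test from the summary statistics above
t, p_two_sided = stats.ttest_ind_from_stats(
    mean1=75.5, std1=9.3, nobs1=30,    # exposed to parental smoking
    mean2=88.2, std2=15.1, nobs2=30,   # not exposed
    equal_var=False)
p_one_sided = p_two_sided / 2          # halving is valid here: t < 0 agrees with Ha
print(t, p_one_sided)                  # t approx. -3.92, one-sided P approx. 0.0001
```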

The influence of Betaine on weight
- We want to investigate the effect that Betaine may have on the weight of calves. In order to determine its influence, a comparison needs to be made with a control group (calves not given Betaine). To statistically test whether Betaine has an influence, we draw two random samples; these form the two groups. One group of calves is given Betaine and their weights are recorded over 8 weeks; the other group is given only milk and their weights are recorded.
- Our data set contains only 11 calves in each group. Since both sample sizes are very small, if there is a difference between the Betaine group and the control group, the difference must be large for us to be able to detect it (reject the null). This is because the standard error for small samples will be quite large.
- If you want to replicate our results, recall that TRT = B corresponds to the group given Betaine and TRT = C to the calves given only milk.

We test the null hypothesis that the mean difference in weight between those given Betaine and those not given Betaine is zero, against the alternative that the mean difference is non-zero. We summarise the results below.

                 Size   Sample mean   Mean diff.   St. dev   St. err   t-transform   P-value
Control group     11
Betaine group     11
Difference

We observe that the t-transform is small and the P-value is large, so we cannot reject the null at the 10% level. This could be because there is no difference, or because there is a difference but there is too much variability in the data for us to see a significant difference with such small sample sizes.

Significant effect
Remember: significance means the evidence of the data is sufficient to reject the null hypothesis (at our stated level α). Only data, and the statistics we calculate from the data, can be statistically "significant". We can say that the sample means are "significantly different" or that the observed effect is "significant". But the conclusion about the population means is simply "they are different."
The observed effect of −12.7 on the FVC of children of smoking parents is significant, so we conclude that the true effect μ_smoke − μ_no is less than zero. Having made this conclusion, or even if we have not, we can always estimate the difference using a confidence interval.

Two-sample t confidence interval
Recall that we have two independent samples and we use the difference between the sample averages (x̄1 − x̄2) to estimate (μ1 − μ2). This estimate has standard error
SE = √(s1²/n1 + s2²/n2).
- The margin of error for a confidence interval for μ1 − μ2 is m = t* × SE.
- We find t* in the line of Table D for the unpooled df (or for the smaller of (n1 − 1, n2 − 1)) and in the column for confidence level C.
- The confidence interval is then computed as (x̄1 − x̄2) ± m.
The interpretation of "confidence" is the same as before: it is the proportion of possible samples for which the method leads to a true statement about the parameters.

Obtain a 99% confidence interval for the smoking damage done to lungs of children exposed to parental smoking, as measured by forced vital capacity (FVC).

Parental smoking   FVC avg.    s      n
Yes                  75.5     9.3    30
No                   88.2    15.1    30

The observed "effect" is x̄_smoke − x̄_no = −12.7. Using df = smaller of (n_smoke − 1, n_no − 1) = 29, we find t* = 2.756. The margin of error is
m = 2.756 × √(9.3²/30 + 15.1²/30) = 2.756 × 3.24 ≈ 8.92,
and the 99% confidence interval is −12.7 ± 8.92 = (−21.6, −3.78).
We conclude that the FVC lung capacity is diminished on average by a value between 3.78 and 21.6 ml in children of smoking parents, with 99% confidence.
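The same interval in Python, again with the conservative df (a check, not part of the original slides):

```python
import numpy as np
from scipy import stats

diff = 75.5 - 88.2                            # observed effect, -12.7
se = np.sqrt(9.3 ** 2 / 30 + 15.1 ** 2 / 30)  # approx. 3.24
t_star = stats.t.ppf(0.995, df=29)            # approx. 2.756 (99% CI, conservative df)
me = t_star * se                              # margin of error, approx. 8.92
print(diff - me, diff + me)                   # approx. -21.6 and -3.78
```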

Can directed reading activities in the classroom help improve reading ability? A class of 21 third-graders participates in these activities for 8 weeks while a control classroom of 23 third-graders follows the same curriculum without the activities. After 8 weeks, all children take a reading test.
95% confidence interval for (μ1 − μ2), with df = 20 conservatively: t* = 2.086. With 95% confidence, (μ1 − μ2) falls within 9.96 ± 8.99, or 0.97 to 18.95.

95% confidence interval for the reading ability study using the more precise degrees of freedom. If you round the df, round down, in this case to 37, so t* = 2.026 (interpolating the table). Note that this method gives a smaller margin of error, so it is to our advantage to use the more precise degrees of freedom.
From StatCrunch: [Stat-T Statistics-Two Sample, uncheck "pool variances"]
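With software there is no need to interpolate the table; for example:

```python
from scipy import stats

print(stats.t.ppf(0.975, df=37))   # approx. 2.026, unpooled df (rounded down)
print(stats.t.ppf(0.975, df=20))   # approx. 2.086, conservative df
```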

Summary for testing μ1 = μ2 with independent samples
- The hypotheses are identified before collecting/observing data.
- To test the null hypothesis H0: μ1 = μ2, use
  t = (x̄1 − x̄2) / √(s1²/n1 + s2²/n2).
- The P-value is obtained from the t-distribution (or t-table) with the unpooled degrees of freedom (computed).
- For a one-sided test with Ha: μ1 < μ2, P-value = area to the left of t.
- For a one-sided test with Ha: μ1 > μ2, P-value = area to the right of t.
- For a two-sided test with Ha: μ1 ≠ μ2, P-value = twice the smaller of the above.
- If P-value < α then H0 is rejected and Ha is accepted. Otherwise, H0 is not rejected even if the evidence is on the same side as the alternative Ha.
- Report the P-value as well as your conclusion.
- You must decide what α you will use before the study, or else it is meaningless.

Summary for estimating μ1 − μ2 with independent samples
- The single-value estimate is x̄1 − x̄2.
- This has standard error SE = √(s1²/n1 + s2²/n2).
- The margin of error for an interval with confidence level C is m = t* × SE, where t* is from the t-distribution using the unpooled degrees of freedom.
- The confidence interval is then (x̄1 − x̄2) ± m.
- You must decide what C you will use before the study, or else it is meaningless.
- For both hypothesis tests and confidence intervals, the key is to use the correct standard error and degrees of freedom for the problem (what is being estimated and how the data are obtained). A sketch combining both procedures follows below.
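Putting the two summaries together, here is a minimal helper (ours, not the course's) that computes the unpooled test and interval from raw data:

```python
import numpy as np
from scipy import stats

def two_sample_t(x1, x2, conf=0.95):
    # Unpooled (Welch) two-sample t test and confidence interval
    x1, x2 = np.asarray(x1, float), np.asarray(x2, float)
    n1, n2 = len(x1), len(x2)
    v1, v2 = x1.var(ddof=1) / n1, x2.var(ddof=1) / n2
    se = np.sqrt(v1 + v2)
    df = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))
    t = (x1.mean() - x2.mean()) / se
    p = 2 * stats.t.sf(abs(t), df)             # two-sided P-value
    t_star = stats.t.ppf(0.5 + conf / 2, df)
    m = t_star * se                            # margin of error
    return t, p, (x1.mean() - x2.mean() - m, x1.mean() - x2.mean() + m)

# Hypothetical data; agrees with stats.ttest_ind(x1, x2, equal_var=False)
rng = np.random.default_rng(1)
x1, x2 = rng.normal(295, 20, 34), rng.normal(319, 15, 29)
print(two_sample_t(x1, x2))
```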

Coffee Shop Customers: West Coast vs. East Coast
The marketing firm obtained two independent random samples: 34 "West Coast" coffee shops and 29 "East Coast" coffee shops. For each shop, the firm observed the average number of customers (per day) over two-week periods. Here μ_WC is the mean, for all West Coast coffee shops, of the variable X_WC = "daily average number of customers". Likewise, μ_EC is the corresponding mean for all East Coast coffee shops.
Side-by-side boxplots help us compare the two samples visually. The West Coast values are generally lower and have slightly more spread than the East Coast values.

Coffee Shop Customers (cont.)
- Is there a difference in the daily average number of customers between West Coast shops and East Coast shops?
- Test the hypotheses H0: μ_WC = μ_EC vs. Ha: μ_WC ≠ μ_EC.
- We will use significance level α = 0.02 (matching the 98% confidence interval computed on the next slide).
- From StatCrunch, the P-value < 0.01, so H0 is rejected.

Coffee Shop Customers (cont.)
- Find the 98% confidence interval for μ_WC − μ_EC.
- The confidence interval can be used to conduct a two-sided test with significance level α = 1 − C.
- Since the confidence interval does not contain 0, we can reject the null hypothesis that μ_WC = μ_EC.
- Using this method to conduct a test, however, does not provide a P-value. Knowing the P-value is important so that you know the strength of your evidence, and not just whether it rejects H0.
- It is possible to modify this method in order to conduct a one-sided test instead. (Use C = 1 − 2α and reject H0 only if the data agree with Ha.)

Pooled two-sample procedures
There are two versions of the two-sample t-test: one assuming equal variances ("pooled two-sample test") and one not assuming equal variances ("unpooled two-sample test") for the two populations. They have slightly different formulas and degrees of freedom.
[Figure: two normally distributed populations with unequal variances.]
The pooled (equal variance) two-sample t-test is mathematically exact. However, the assumption of equal variance is hard to check, and thus the unpooled (unequal variance) t-test is safer. In fact, the two tests give very similar results when the sample variances are not very different.

When both populations have the same standard deviation σ, the pooled estimator of σ² is
s_p² = [(n1 − 1)s1² + (n2 − 1)s2²] / (n1 + n2 − 2).
s_p replaces s1 and s2 in the standard error computation, giving SE = s_p √(1/n1 + 1/n2). The sampling distribution for the t-statistic is the t distribution with (n1 + n2 − 2) degrees of freedom.
A level C confidence interval for μ1 − μ2 is
(x̄1 − x̄2) ± t* s_p √(1/n1 + 1/n2), with area C between −t* and t*.
To test the hypothesis H0: μ1 = μ2 against a one-sided or a two-sided alternative, compute the pooled two-sample t statistic
t = (x̄1 − x̄2) / (s_p √(1/n1 + 1/n2))
and use the t(n1 + n2 − 2) distribution.
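A sketch of the pooled computation (our helper; SciPy's built-in pooled test gives the same result):

```python
import numpy as np
from scipy import stats

def pooled_t(x1, x2):
    # Pooled two-sample t statistic and two-sided P-value, df = n1 + n2 - 2
    n1, n2 = len(x1), len(x2)
    sp2 = ((n1 - 1) * np.var(x1, ddof=1) + (n2 - 1) * np.var(x2, ddof=1)) / (n1 + n2 - 2)
    se = np.sqrt(sp2 * (1 / n1 + 1 / n2))
    t = (np.mean(x1) - np.mean(x2)) / se
    return t, 2 * stats.t.sf(abs(t), df=n1 + n2 - 2)

# Hypothetical data, compared against SciPy's built-in pooled test
rng = np.random.default_rng(2)
x1, x2 = rng.normal(0, 1, 15), rng.normal(0.5, 1, 12)
print(pooled_t(x1, x2))
print(stats.ttest_ind(x1, x2, equal_var=True))   # same t and P-value
```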

Which type of test? One sample, paired samples, or two independent samples?
- Comparing vitamin content of bread immediately after baking vs. 3 days later (the same loaves are used on day one and 3 days later).
- Comparing vitamin content of bread immediately after baking vs. 3 days later (tests made on independent loaves).
- Average fuel efficiency for 2005 vehicles is 21 miles per gallon. Is average fuel efficiency higher in the new generation "green vehicles"?
- Is blood pressure altered by use of an oral contraceptive? Comparing a group of women not using an oral contraceptive with a group taking it.
- Review insurance records for dollar amount paid after fire damage in houses equipped with a fire extinguisher vs. houses without one. Was there a difference in the average dollar amount paid?

Cautions about the two-sample t-test or interval
- Using the correct standard error and degrees of freedom is critical.
- As in the one-sample t-test, the method assumes simple random samples.
- Likewise, it also assumes the populations have normal distributions.
- Skewness and outliers can make the methods inaccurate (that is, having a confidence/significance level other than what they are supposed to have).
- The larger the sample sizes, the less this is a problem.
- It is also less of a problem if the populations have similar skewness and the two samples are close to the same size.
- "Significant effect" merely means we have sufficient evidence to say the two true means are different. It does not explain why they are different or how meaningful/important the difference is.
- A confidence interval is needed to determine how big the effect is.

Hazards with skewness
To see how skewness affects statistical inference, we can do some simulations. We use data from the "exponential" distribution, which is highly skewed. In StatCrunch: [Data-Simulate-Exponential, enter mean value = 1]

Hazards with skewness, cont.
- We now simulate 1000 samples of size n = 25 and compute the t-statistic for each sample. In StatCrunch: [Data-Simulate-Exponential, enter 25 rows, 1000 columns, mean value = 1, and statistic sqrt(25)*(mean(Exponential)-1)/std(Exponential)]
- With df = 24, we find t* = 2.064 for C = 95% and t* = 2.492 for C = 98%. But the corresponding percentiles of the actual sampling distribution are wildly different. Only 93.5% of CIs computed with C = 95% will contain the true mean.
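The StatCrunch simulation can be replicated in Python; this sketch estimates the actual coverage of the nominal 95% interval (the seed is arbitrary):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, reps = 25, 1000
t_star = stats.t.ppf(0.975, df=n - 1)        # approx. 2.064
cover = 0
for _ in range(reps):
    x = rng.exponential(scale=1.0, size=n)   # skewed population, true mean = 1
    se = x.std(ddof=1) / np.sqrt(n)
    lo, hi = x.mean() - t_star * se, x.mean() + t_star * se
    cover += (lo <= 1.0 <= hi)
print(cover / reps)   # typically below the nominal 0.95 (the slides report 93.5%)
```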