Presentation on theme: "Objectives 7.1, 7.2Inference for comparing means of two populations  Matched pairs t confidence interval  Matched pairs t hypothesis test  Two-sample."— Presentation transcript:

1 Objectives 7.1, 7.2 Inference for comparing means of two populations  Matched pairs t confidence interval  Matched pairs t hypothesis test  Two-sample t significance test  Two-sample t confidence interval  Robustness and general assumptions  Non-normal population distributions and small samples Adapted from authors’ slides © 2012 W.H. Freeman and Company

2 Matched pairs inference procedures Sometimes we want to compare treatments or conditions at the individual level. These situations produce two samples that are not independent – they are related to each other. The subjects of one sample are identical to, or matched (paired) with, the subjects of the other sample.  Example: Pre-test and post-test studies look at data collected on the same subjects before and after some “treatment” is performed.  Example: Twin studies often try to sort out the influence of genetic factors by comparing a variable between sets of twins.  Example: Using people matched for age, sex, and education in social studies helps to cancel out the effects of these potentially relevant variables. Except for pre/post studies, treatments should be assigned at random within each pair when the study is an experiment.

3 For data from a matched pairs design, we use the observed differences X_difference = X_1 − X_2 to test the difference in the two population means. The hypotheses can then be expressed as H_0: µ_difference = 0; H_a: µ_difference > 0 (or < 0, or ≠ 0). Conceptually, this is not different from our earlier tests for the mean of one population. There is just one mean, µ_difference, to test.

4 Sweetening colas (revisited) The sweetness loss due to storage was evaluated by 10 professional tasters (comparing the sweetness before and after storage). Although we did not mention it explicitly before, this is a pre-/post-test design and the variable is the difference: Sweetness after − Sweetness before.

Taster | Change in sweetness
1 | −2.0
2 | −0.4
3 | −0.7
4 | −2.0
5 | 0.4
6 | −2.2
7 | 1.3
8 | −1.2
9 | −1.1
10 | −2.3

We want to test whether storage results in a loss of sweetness. Since the variable is “after − before”, a loss corresponds to a negative change, so the hypotheses are H_0: μ_change = 0 versus H_a: μ_change < 0. A matched pairs test of significance is therefore just like a one-sample test.
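As a check outside StatCrunch, here is a minimal sketch in Python (assuming a recent SciPy with the `alternative` argument; the variable names are ours):

```python
from scipy import stats

# Change in sweetness (after - before) for the 10 tasters
changes = [-2.0, -0.4, -0.7, -2.0, 0.4, -2.2, 1.3, -1.2, -1.1, -2.3]

# A matched pairs test is a one-sample t-test on the differences.
# H0: mu_change = 0 vs Ha: mu_change < 0 (storage causes a loss)
result = stats.ttest_1samp(changes, popmean=0, alternative='less')
print(result.statistic, result.pvalue)  # t ≈ -2.70, one-sided P ≈ 0.012
```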

5 Does lack of caffeine increase depression? Individuals diagnosed as caffeine-dependent were deprived of caffeine-rich foods and assigned pills for 10 days. Sometimes, the pills contained caffeine and other times they contained a placebo. A depression score was determined separately for the caffeine pills (as a whole) and for the placebo pills.  There are 2 data points for each subject, but we only look at the difference (placebo score − caffeine score).  We calculate x̄_diff = 7.36 and s_diff = 6.92, with df = 10.  We test H_0: μ_difference = 0 against H_a: μ_difference > 0, using α = 0.05. Why is a one-sided test ok?  From the t-distribution: P-value = 0.0027, which is quite small, in fact smaller than α. Depression is greater with the placebo than with the caffeine pills, on average.
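The P-value can be reproduced from the summary statistics alone; a minimal sketch (df = 10 implies n = 11 subjects):

```python
import math
from scipy import stats

mean_diff, s_diff, n = 7.36, 6.92, 11  # df = n - 1 = 10

t = (mean_diff - 0) / (s_diff / math.sqrt(n))
p = stats.t.sf(t, df=n - 1)            # one-sided: P(T > t)
print(round(t, 3), round(p, 4))        # t ≈ 3.53, P ≈ 0.0027
```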

6 The weight of calves  It is clear that the weight of a calf increases over time. However, it may be a surprise to learn that the increase is not immediate. This can be seen by analyzing the calf weight data that we have been studying over the past few weeks.  Looking at the calf data in StatCrunch, to see how much weight each calf gains or loses in Week 1 we must take the difference (weight at week 0 − weight at week 1), and the analysis needs to be done on the differences (these can be stored in your StatCrunch spreadsheet).  We then conduct an analysis identical to the one-sample methods, but on the differences, and obtain the table below (the numbers were computed in StatCrunch). Observe that to obtain the t-transform we need the mean difference and its standard error (a measure of spread of the mean difference): t-transform = (mean difference − 0) / s.e.

7 Results of the matched pairs tests, week by week (difference = weight at week 0 − weight at week n):

Week | Avg. difference | Std. error | t-transform | H_a: µ_diff < 0 | H_a: µ_diff ≠ 0 | H_a: µ_diff > 0
1 | 3.78 | 0.58 | 6.48 | No evidence (average > 0) | Yes, p-value < 0.0001 | Yes, p-value < 0.0001
2 | 5.53 | 0.49 | 11.13 | No evidence (average > 0) | Yes, p-value < 0.0001 | Yes, p-value < 0.0001
3 | −0.46 | 0.69 | −0.66 | No evidence, p-value = 0.256 | No evidence, p-value = 0.512 | No evidence (average < 0)
4 | −8.33 | 0.97 | −8.62 | Yes, p-value < 0.0001 | Yes, p-value < 0.0001 | No evidence (average < 0)

We see that in Weeks 1 and 2 the weight is below birth weight (a drop); in fact the t-transform in Week 2 is much greater than the t-transform in Week 1, so the p-value in Week 2 is a lot smaller than the p-value in Week 1. In Week 3, we do not reject any hypothesis, which suggests that the weight is back to birth weight. And from Week 4 onwards we see a gain in weight.
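To illustrate how each cell of the table is computed, here is a hedged sketch from the summary statistics; we assume df = 47 (48 calves), matching the CI slide below:

```python
from scipy import stats

df = 47  # assumed: 48 calves, as in the confidence interval slide
weeks = {1: (3.78, 0.58), 2: (5.53, 0.49), 3: (-0.46, 0.69), 4: (-8.33, 0.97)}

for week, (diff, se) in weeks.items():
    t = (diff - 0) / se                 # t-transform
    p_less = stats.t.cdf(t, df)         # Ha: mu_diff < 0
    p_greater = stats.t.sf(t, df)       # Ha: mu_diff > 0
    p_two = 2 * min(p_less, p_greater)  # Ha: mu_diff != 0
    print(week, round(t, 2), round(p_less, 4), round(p_greater, 4), round(p_two, 4))
```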

8 CI for the mean weight difference  Below we construct 95% CIs (using the t-distribution with 47 df; t* = 2.01 leaves 2.5% in each tail) for the mean difference. The mean difference is likely to be in this interval. Week 1 CI: [3.78 ± 2.01×0.58] = [2.61, 4.96] Week 2 CI: [5.53 ± 2.01×0.49] = [4.55, 6.51] Week 3 CI: [−0.46 ± 2.01×0.69] = [−1.81, 0.92] Week 4 CI: [−8.33 ± 2.01×0.97] = [−10.28, −6.37] Since zero does not lie in the intervals for Week 1, Week 2 and Week 4, we reject the null in the two-sided test (at the 5% level); the two-sided p-value is less than 5%. Using the information above we can also construct 95% intervals for the weight of a randomly selected healthy calf. These will be much wider than the intervals above, and will not decrease with sample size. Such intervals can help us determine whether a calf is healthy or not. How to construct such intervals is beyond this course.
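A sketch of the interval computation (t* leaves 2.5% in each tail):

```python
from scipy import stats

df = 47
t_star = stats.t.ppf(0.975, df)  # ≈ 2.01 for 95% confidence

for week, (diff, se) in {1: (3.78, 0.58), 2: (5.53, 0.49),
                         3: (-0.46, 0.69), 4: (-8.33, 0.97)}.items():
    margin = t_star * se
    print(f"Week {week}: [{diff - margin:.2f}, {diff + margin:.2f}]")
```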

9  The purpose of most studies is to compare the effects of different treatments or conditions. In fact, the most effective studies are the ones that make direct comparisons from data within the studies.  Often the subjects are observed separately under the different conditions, resulting in samples that are independent. That is, the subjects of each sample are obtained and observed separately from, and without any regard to, the subjects of the other samples.  Example: Students are randomly assigned to one of two classes. Those in the first class are given iPads that they bring to class to be interactively involved. The other class has a more traditional format.  Example: An economist obtains labor data from subjects in France and from subjects in the U.S. The samples are independently obtained.  As in the matched pairs design, subjects should be randomized – assigned to the treatment groups at random – when the study is an experiment. Independent samples inference

10 Independent sample scenarios Scenario 1: Sample 1 is randomly obtained from Population 1 and, by an independent (separate/unrelated) means, Sample 2 is randomly obtained from Population 2. Scenario 2: Sample 1 is randomly obtained and its subjects are given Treatment 1, and independently Sample 2 is randomly obtained from the same Population and its subjects are given Treatment 2. [Diagrams: Population 1 → Sample 1 and Population 2 → Sample 2; one Population → Sample 1 and Sample 2] In both scenarios, subjects in the samples are obtained and observed separately, and without any relationship to subjects in the other sample(s). Independence is not the same as “different”.

11 Difference of two sample means  We are interested in two populations/conditions with respective parameters (μ_1, σ_1) and (μ_2, σ_2).  We obtain two independent simple random samples with respective statistics (x̄_1, s_1) and (x̄_2, s_2).  We use x̄_1 and x̄_2 to estimate the unknown μ_1 and μ_2.  We use s_1 and s_2 to estimate the unknown σ_1 and σ_2.  Since we wish to compare the means, we use x̄_1 − x̄_2 to estimate the difference in means μ_1 − μ_2. After the original coffee sales study, the marketing firm obtained independent random samples: 34 “West Coast” shops and 29 “East Coast” shops. The firm observed the average number of customers (per day) over 2-week periods. The two sample means were x̄_WC = 295 and x̄_EC = 319. Thus, they estimate that East Coast shops have an average of 319 − 295 = 24 customers more per day than West Coast shops do.

12 Distribution of the difference of means  In order to do statistical inference, we must know a few things about the sampling distribution of our statistic.  The sampling distribution of x̄_1 − x̄_2 has standard deviation √(σ_1²/n_1 + σ_2²/n_2). (Mathematically, the variance of the difference is the sum of the variances of the two sample means.)  This is estimated by the standard error (SE) √(s_1²/n_1 + s_2²/n_2).  For sufficiently large samples, the distribution is approximately normal.  Then the two-sample t statistic is t = [(x̄_1 − x̄_2) − (μ_1 − μ_2)] / √(s_1²/n_1 + s_2²/n_2).  This statistic has an approximate t-distribution on which we will base our inferences. But the degrees of freedom is complicated …
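In code, the standard error and two-sample t statistic look like this; a minimal sketch, with names of our own choosing:

```python
import math

def two_sample_t(mean1, s1, n1, mean2, s2, n2, mu_diff=0.0):
    """Unpooled two-sample t statistic for testing mu1 - mu2 = mu_diff."""
    se = math.sqrt(s1**2 / n1 + s2**2 / n2)  # SE of the difference of means
    return ((mean1 - mean2) - mu_diff) / se
```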

13 Statisticians have a formula for estimating the proper degrees of freedom (called the unpooled df). Most statistical software will do this, and you don’t need to learn it. The unpooled df is always at least the smaller of (n_1 − 1, n_2 − 1), so this smaller value can be used instead; it is called the conservative degrees of freedom. It is useful for doing HW problems, but for practical problems you should use statistical software, which will use the more accurate unpooled df. Two sample degrees of freedom
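For reference only, the unpooled df comes from the Welch–Satterthwaite approximation; a sketch (you do not need this formula for homework):

```python
def welch_df(s1, n1, s2, n2):
    """Welch-Satterthwaite approximation to the unpooled degrees of freedom."""
    v1, v2 = s1**2 / n1, s2**2 / n2
    return (v1 + v2)**2 / (v1**2 / (n1 - 1) + v2**2 / (n2 - 1))

# Example with the FVC data used later: s = 9.3 (n = 30) and s = 15.1 (n = 30)
print(welch_df(9.3, 30, 15.1, 30))  # ≈ 48.2, larger than the conservative df = 29
```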

14 The strange standard error The standard error for the two sample test looks quite complicated, but it is quite logical. Recall that in the one sample test the standard error decreased as the sample size increased: the sample standard deviation stayed about the same while n grew, so the standard error shrank. In the two sample case there are two sample sizes, and both must increase in order for the standard error to decrease. Consider the following examples:  If one sample's size grows but the other stays the same, the standard error does not decrease much. This is because the estimator of one of the means will not improve – consider the case where there is only one person in a group.  If the standard deviations of both populations are about the same, and the overall number of subjects in a study is fixed, then using an equal number of subjects in each group leads to the smallest standard error.

15 The null hypothesis is that both population means μ_1 and μ_2 are equal, thus their difference is equal to zero. H_0: μ_1 = μ_2 ⟺ H_0: μ_1 − μ_2 = 0 Either a one-sided or a two-sided alternative hypothesis can be tested. Using the value (μ_1 − μ_2) = 0 given in H_0, the test statistic becomes t = (x̄_1 − x̄_2) / √(s_1²/n_1 + s_2²/n_2). To find the P-value, we look up the appropriate probability of the t-distribution using either the unpooled df or, more conservatively, df = smaller of (n_1 − 1, n_2 − 1). Two-sample t significance test

16 Does smoking damage the lungs of children exposed to parental smoking? Forced vital capacity (FVC) is the volume (in milliliters) of air that an individual can exhale in 6 seconds. We want to know whether parental smoking decreases children’s lung capacity as measured by the FVC test. Is the mean FVC lower in the population of children exposed to parental smoking than it is in children not exposed? FVC was obtained for a sample of children not exposed to parental smoking and a group of children exposed to parental smoking.

Parental smoking | FVC avg. | s | n
Yes | 75.5 | 9.3 | 30
No | 88.2 | 15.1 | 30

17 H_0: μ_smoke = μ_no ⟺ H_0: (μ_smoke − μ_no) = 0
H_a: μ_smoke < μ_no ⟺ H_a: (μ_smoke − μ_no) < 0 (one-sided)

Parental smoking | FVC avg. | s | n
Yes | 75.5 | 9.3 | 30
No | 88.2 | 15.1 | 30

The observed “effect” is a substantial reduction in FVC. But is it “significant”? To answer this, we calculate the t-statistic: t = (75.5 − 88.2) / √(9.3²/30 + 15.1²/30) = −12.7 / 3.24 = −3.922. Even if we use df = smaller of (n_smoke − 1, n_no − 1) = 29, we find that a t-statistic beyond ±3.659 gives a P-value < 0.0005 (for a one-sided test). So our t = −3.922 is very significant, and we reject H_0. Lung capacity is significantly impaired in children of smoking parents.
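A minimal check of this computation (one-sided P-value from the t-distribution with the conservative df = 29):

```python
import math
from scipy import stats

mean_smoke, s_smoke, n_smoke = 75.5, 9.3, 30
mean_no, s_no, n_no = 88.2, 15.1, 30

se = math.sqrt(s_smoke**2 / n_smoke + s_no**2 / n_no)
t = (mean_smoke - mean_no) / se
p = stats.t.cdf(t, df=min(n_smoke, n_no) - 1)  # one-sided, conservative df = 29
print(round(t, 3), p)                          # t ≈ -3.922, P ≈ 0.0002
```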

18 The influence of Betaine on weight  We want to investigate the effect that Betaine may have on the weight of calves. In order to determine its influence, a comparison needs to be made with a control group (calves not given Betaine). To statistically test whether Betaine has an influence, we draw two random samples; these form the two groups. In one group of calves Betaine is given and their weight is recorded over 8 weeks; in the other group only milk is given and their weights are recorded.  Our data set contains only 11 calves in each group. The sample sizes are both very small, so if there is a difference between the Betaine group and the control group, the difference must be large for us to be able to detect it (reject the null); this is because the standard error for small samples will be quite large.  If you want to replicate our results, recall that TRT = B corresponds to the group given Betaine and TRT = C to the calves given only milk.

19  We will test that the mean difference in weights between those given Betaine and those not given Betaine is zero, against the alternative that the mean difference is nonzero. We summarise the results below.

Group | Size | Sample mean | St. dev
Control | 11 | 144.45 | 16.12
Betaine | 11 | 139.54 | 15.51

Difference: mean = 4.91, st. error = 6.74, t-transform = 0.727, two-sided p-value = 0.47.

We observe that the t-transform is small and the p-value is large, thus we cannot reject the null at the 10% level. This could be because there is no difference, or because there is a difference but there is too much variability in the data for us to see a significant difference with such small sample sizes.
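These numbers can be reproduced from the summary statistics with SciPy's `ttest_ind_from_stats`; a sketch (`equal_var=False` requests the unpooled test):

```python
from scipy import stats

# Unpooled (Welch) two-sample t-test from summary statistics
result = stats.ttest_ind_from_stats(
    mean1=144.45, std1=16.12, nobs1=11,   # control group
    mean2=139.54, std2=15.51, nobs2=11,   # Betaine group
    equal_var=False)
print(result)  # t ≈ 0.73, two-sided P ≈ 0.47
```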

20 Remember: Significance means the evidence of the data is sufficient to reject the null hypothesis (at our stated level α). Only data, and the statistics we calculate from the data, can be statistically “significant”. We can say that the sample means are “significantly different” or that the observed effect is “significant”. But the conclusion about the population means is simply “they are different.” The observed effect of −12.7 on the FVC of children of smoking parents is significant, so we conclude that the true effect μ_smoke − μ_no is less than zero. Having made this conclusion, or even if we have not, we can always estimate the difference using a confidence interval. Significant effect

21 Recall that we have two independent samples and we use the difference between the sample averages (x̄_1 − x̄_2) to estimate (μ_1 − μ_2). This estimate has standard error SE = √(s_1²/n_1 + s_2²/n_2).  The margin of error for a confidence interval of μ_1 − μ_2 is m = t* × √(s_1²/n_1 + s_2²/n_2).  We find t* in the line of Table D for the unpooled df (or for the smaller of (n_1 − 1, n_2 − 1)) and in the column for confidence level C.  The confidence interval is then computed as (x̄_1 − x̄_2) ± m. The interpretation of “confidence” is the same as before: it is the proportion of possible samples for which the method leads to a true statement about the parameters. Two-sample t confidence interval

22 Obtain a 99% confidence interval for the smoking damage done to lungs of children exposed to parental smoking, as measured by forced vital capacity (FVC).

Parental smoking | FVC avg. | s | n
Yes | 75.5 | 9.3 | 30
No | 88.2 | 15.1 | 30

The observed “effect” is x̄_smoke − x̄_no = 75.5 − 88.2 = −12.7. Using df = smaller of (n_smoke − 1, n_no − 1) = 29, we find t* = 2.756. The margin of error is m = 2.756 × √(9.3²/30 + 15.1²/30) ≈ 8.92, and the 99% confidence interval is −12.7 ± 8.92. We conclude that FVC is diminished on average by between 3.78 and 21.62 ml in children of smoking parents, with 99% confidence.
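A sketch of this interval computation:

```python
import math
from scipy import stats

diff = 75.5 - 88.2                          # observed effect: -12.7
se = math.sqrt(9.3**2 / 30 + 15.1**2 / 30)
t_star = stats.t.ppf(0.995, df=29)          # ≈ 2.756 for 99% confidence
margin = t_star * se
print(diff - margin, diff + margin)         # ≈ (-21.62, -3.78)
```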

23 Can directed reading activities in the classroom help improve reading ability? A class of 21 third-graders participates in these activities for 8 weeks while a control classroom of 23 third-graders follows the same curriculum without the activities. After 8 weeks, all children take a reading test. 95% confidence interval for (μ_1 − µ_2), with df = 20 conservatively: t* = 2.086. With 95% confidence, (µ_1 − µ_2) falls within 9.96 ± 8.99, or 0.97 to 18.95.

24 95% confidence interval for the reading ability study using the more precise degrees of freedom. If you round the df, round down, in this case to 37. So t* = 2.025 (interpolating the table). Note that this method gives a smaller margin of error, so it is to our advantage to use the more precise degrees of freedom. From StatCrunch: [Stat-T Statistics-Two Sample, uncheck “pool variances”]

25 Summary for testing μ_1 = μ_2 with independent samples  The hypotheses are identified before collecting/observing data.  To test the null hypothesis H_0: μ_1 = μ_2, use t = (x̄_1 − x̄_2) / √(s_1²/n_1 + s_2²/n_2).  The P-value is obtained from the t-distribution (or t-table) with the unpooled degrees of freedom (computed).  For a one-sided test with H_a: μ_1 < μ_2, P-value = area to left of t.  For a one-sided test with H_a: μ_1 > μ_2, P-value = area to right of t.  For a two-sided test with H_a: μ_1 ≠ μ_2, P-value = twice the smaller of the above.  If P-value < α then H_0 is rejected and H_a is accepted (this rule applies to both one-sided and two-sided tests, since the doubling is built into the two-sided P-value). Otherwise, H_0 is not rejected even if the evidence is on the same side as the alternative H_a.  Report the P-value as well as your conclusion.  You must decide what α you will use before the study or else it is meaningless.

26 Summary for estimating μ_1 − μ_2 with independent samples  The single value estimate is x̄_1 − x̄_2.  This has standard error SE = √(s_1²/n_1 + s_2²/n_2).  The margin of error for an interval with confidence level C is m = t* × SE, where t* is from the t-distribution using the unpooled degrees of freedom.  The confidence interval is then (x̄_1 − x̄_2) ± m.  You must decide what C you will use before the study or else it is meaningless.  For both hypothesis tests and confidence intervals, the key is to use the correct standard error and degrees of freedom for the problem (what is being estimated and how the data are obtained).
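Pulling the two summary slides together, a compact sketch that returns the t statistic, unpooled df, two-sided P-value, and confidence interval from summary statistics (all names are ours):

```python
import math
from scipy import stats

def two_sample_summary(m1, s1, n1, m2, s2, n2, conf=0.95):
    """Unpooled two-sample t-test and CI for mu1 - mu2 from summary stats."""
    v1, v2 = s1**2 / n1, s2**2 / n2
    se = math.sqrt(v1 + v2)
    df = (v1 + v2)**2 / (v1**2 / (n1 - 1) + v2**2 / (n2 - 1))  # unpooled (Welch) df
    t = (m1 - m2) / se
    p_two_sided = 2 * stats.t.sf(abs(t), df)
    t_star = stats.t.ppf((1 + conf) / 2, df)
    return t, df, p_two_sided, (m1 - m2 - t_star * se, m1 - m2 + t_star * se)

# Betaine study from slide 19:
print(two_sample_summary(144.45, 16.12, 11, 139.54, 15.51, 11))
# t ≈ 0.73, df ≈ 20.0, two-sided P ≈ 0.47, CI ≈ (-9.2, 19.0)
```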

27 Coffee Shop Customers: West Coast vs. East Coast Side-by-side boxplots help us compare the two samples visually. The West Coast values are generally lower and have slightly more spread than the East Coast values. The marketing firm obtained two independent random samples: 34 “West Coast” coffee shops and 29 “East Coast” coffee shops. For each shop, the firm observed the average number of customers (per day) over 2-week periods. Here μ_WC is the mean, for all West Coast coffee shops, of the variable X_WC = “daily average number of customers”. Likewise, μ_EC is the corresponding mean for all East Coast coffee shops.

28 Coffee Shop Customers (cont.)  Is there a difference in the daily average number of customers between West Coast shops and East Coast shops?  Test the hypotheses H_0: μ_WC = μ_EC vs. H_a: μ_WC ≠ μ_EC.  We will use significance level α = 0.01.  From StatCrunch, the P-value = 0.0028 < 0.01, so H_0 is rejected.

29 Coffee Shop Customers (cont.)  Find the 98% confidence interval for μ_WC − μ_EC.  The confidence interval can be used to conduct a two-sided test with significance level α = 1 − C.  Since the confidence interval does not contain 0, we can reject the null hypothesis that μ_WC = μ_EC.  Using this method to conduct a test, however, does not provide a P-value. Knowing the P-value is important so that you know the strength of your evidence, and not just whether it rejects H_0.  It is possible to modify this method in order to conduct a one-sided test instead. (Use C = 1 − 2α and reject H_0 only if the data agree with H_a.)

30 Pooled two-sample procedures There are two versions of the two-sample t-test: one assuming equal variances (“pooled 2-sample test”) and one not assuming equal variances (“unpooled 2-sample test”) for the two populations. They have slightly different formulas and degrees of freedom. [Figure: two normally distributed populations with unequal variances] The pooled (equal variance) two-sample t-test is mathematically exact. However, the assumption of equal variance is hard to check, and thus the unpooled (unequal variance) t-test is safer. In fact, the two tests give very similar results when the sample variances are not very different.

31 When both populations have the same standard deviation σ, the pooled estimator of σ² is s_p² = [(n_1 − 1)s_1² + (n_2 − 1)s_2²] / (n_1 + n_2 − 2). Then s_p replaces both s_1 and s_2 in the standard error computation, which becomes SE = s_p √(1/n_1 + 1/n_2). The sampling distribution for the t-statistic is the t distribution with (n_1 + n_2 − 2) degrees of freedom. A level C confidence interval for µ_1 − µ_2 is (x̄_1 − x̄_2) ± t* s_p √(1/n_1 + 1/n_2) (with area C between −t* and t*). To test the hypothesis H_0: µ_1 = µ_2 against a one-sided or a two-sided alternative, compute the pooled two-sample t statistic t = (x̄_1 − x̄_2) / (s_p √(1/n_1 + 1/n_2)) and use the t(n_1 + n_2 − 2) distribution.
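A sketch of the pooled procedure from summary statistics; reusing the Betaine example, where the two sample standard deviations are similar:

```python
import math
from scipy import stats

def pooled_t_test(m1, s1, n1, m2, s2, n2):
    """Pooled (equal-variance) two-sample t statistic, df, and two-sided P."""
    sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)
    se = math.sqrt(sp2) * math.sqrt(1 / n1 + 1 / n2)
    t = (m1 - m2) / se
    df = n1 + n2 - 2
    return t, df, 2 * stats.t.sf(abs(t), df)

# Pooled and unpooled results agree closely here, as the slide suggests
print(pooled_t_test(144.45, 16.12, 11, 139.54, 15.51, 11))  # t ≈ 0.73, df = 20, P ≈ 0.48
```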

32 Which type of test? One sample, paired samples or two independent samples?  Comparing vitamin content of bread immediately after baking vs. 3 days later (the same loaves are used on day one and 3 days later).  Comparing vitamin content of bread immediately after baking vs. 3 days later (tests made on independent loaves).  Average fuel efficiency for 2005 vehicles is 21 miles per gallon. Is average fuel efficiency higher in the new generation “green vehicles”?  Is blood pressure altered by use of an oral contraceptive? Comparing a group of women not using an oral contraceptive with a group taking it.  Review insurance records for dollar amount paid after fire damage in houses equipped with a fire extinguisher vs. houses without one. Was there a difference in the average dollar amount paid?

33 Cautions about the two sample t-test or interval  Using the correct standard error and degrees of freedom is critical.  As in the one sample t-test, the method assumes simple random samples.  Likewise, it also assumes the populations have normal distributions.  Skewness and outliers can make the methods inaccurate (that is, having confidence/significance levels other than what they are supposed to have).  The larger the sample sizes, the less this is a problem.  It also is less of a problem if the populations have similar skewness and the two samples are close to the same size.  “Significant effect” merely means we have sufficient evidence to say the two true means are different. It does not explain why they are different or how meaningful/important the difference is.  A confidence interval is needed to determine how big the effect is.

34 To see how skewness affects statistical inference, we can do some simulations.  We use data from the “exponential” distribution, which is highly skewed. In StatCrunch: [Data-Simulate-Exponential, enter mean value = 1] Hazards with skewness

35 Hazards with skewness, cont.  We now simulate 1000 samples of size n = 25 and compute the t-statistic for each sample. In StatCrunch: [Data-Simulate-Exponential, enter 25 rows, 1000 columns, mean value = 1, and statistic sqrt(25)*(mean(Exponential)-1)/std(Exponential)] With df = 24, we find t* = 2.064 for C = 95% and t* = 2.492 for C = 98%. But the corresponding percentiles of the actual sampling distribution are wildly different. Only about 93.5% of CIs computed with C = 95% will contain the true mean.
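The same experiment can be run outside StatCrunch; a sketch with NumPy (the exact coverage will vary slightly from run to run):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, reps, true_mean = 25, 1000, 1.0
t_star = stats.t.ppf(0.975, df=n - 1)  # ≈ 2.064

covered = 0
for _ in range(reps):
    sample = rng.exponential(scale=true_mean, size=n)
    se = sample.std(ddof=1) / np.sqrt(n)
    lo, hi = sample.mean() - t_star * se, sample.mean() + t_star * se
    covered += (lo <= true_mean <= hi)

print(covered / reps)  # typically ≈ 0.93-0.94, below the nominal 0.95
```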
