1 Comparing Two Samples Harry R. Erwin, PhD
School of Computing and Technology University of Sunderland

2 Resources
Crawley, M.J. (2005) Statistics: An Introduction Using R. Wiley.
Gentle, J.E. (2002) Elements of Computational Statistics. Springer.
Gonick, L. and Smith, W. (1993) The Cartoon Guide to Statistics. HarperResource (for fun).
Freund, R.J. and Wilson, W.J. (1998) Regression Analysis. Academic Press.

3 Why Test?
Statistics is an experimental science, not really a branch of mathematics. It's a tool that can tell you whether two data sets differ genuinely or merely by accident. It does not give you certainty. This lecture discusses the comparison of samples.

4 Don't Complicate Things
Use the classical tests:
- var.test to compare two variances (Fisher's F test)
- t.test to compare two means (Student's t test)
- wilcox.test to compare two means with non-normal errors (Wilcoxon's rank-sum test)
- prop.test (binomial test) to compare two proportions
- cor.test (Pearson's or Spearman's rank correlation) to correlate two variables
- chisq.test (chi-square test) or fisher.test (Fisher's exact test) to test for independence in contingency tables

5 Comparing Two Variances
Before comparing means, verify that the variances are not significantly different:
var.test(set1, set2)
This performs Fisher's F test. If the variances are significantly different, you can transform the response (y) variable to equalise the variances, or you can still use t.test, which applies Welch's modified test by default.
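A minimal sketch in R (the samples here are simulated purely for illustration):

  set.seed(42)                          # make the simulated samples reproducible
  set1 <- rnorm(30, mean = 10, sd = 2)  # 30 observations, smaller spread
  set2 <- rnorm(30, mean = 10, sd = 3)  # 30 observations, larger spread
  var.test(set1, set2)                  # Fisher's F test; a small p-value suggests unequal variances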

6 Comparing Two Means
Student's t test (t.test) assumes that the samples are independent, the variances are constant, and the errors are normally distributed. By default it uses the Welch-Satterthwaite approximation (slightly less power), which remains valid when the variances differ. The test can also be used for paired data. The Wilcoxon rank-sum test (wilcox.test) is used for independent samples whose errors are not normally distributed. If you apply a transform to get constant variance, you will probably have to use this test.
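A short sketch of the three variants (again with invented data):

  set.seed(1)
  a <- rnorm(25, mean = 5)            # simulated control group
  b <- rnorm(25, mean = 6)            # simulated treatment group
  t.test(a, b)                        # Welch test (the default, var.equal = FALSE)
  t.test(a, b, var.equal = TRUE)      # classical Student's t, if var.test found no difference
  wilcox.test(a, b)                   # rank-sum alternative for non-normal errors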

7 Paired Observations
The measurements will not be independent. Use t.test with paired = TRUE; you are then doing a single-sample test of the differences against zero. When you can do a paired t test, you should always do the paired test: it is more powerful, and the pairing deals with blocking, spatial correlation, and temporal correlation.
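For example (the before/after values are hypothetical):

  before <- c(140, 138, 150, 148, 135, 142, 146, 139, 151, 144)
  after  <- c(135, 136, 147, 144, 136, 138, 144, 136, 147, 141)
  t.test(before, after, paired = TRUE)  # paired test on the same ten subjects
  t.test(before - after, mu = 0)        # the equivalent one-sample formulation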

8 Sign Test
Used when you can't measure a difference but can see one. Use the binomial test (binom.test) for this. Binomial tests can also be used to compare two proportions, via prop.test.
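A sketch of both uses (the counts are invented for illustration):

  binom.test(15, 20, p = 0.5)         # sign test: 15 of 20 differences were positive
  prop.test(c(45, 30), c(100, 100))   # compare two proportions: 45/100 vs 30/100 successes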

9 Chi-square Contingency Tables
Deals with count data. Suppose each subject is classified by two characteristics (hair colour and eye colour). The null hypothesis is that the two characteristics are independent. Create a matrix that contains the counts and apply chisq.test(matrix). This gives you a p-value for the observed counts under the assumption of independence.
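For example (hypothetical counts):

  counts <- matrix(c(38, 14, 11, 51), nrow = 2,
                   dimnames = list(hair = c("dark", "fair"),
                                   eyes = c("brown", "blue")))
  chisq.test(counts)                  # a small p-value rejects independence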

10 Fisher's Exact Test
Used for the analysis of contingency tables when one or more of the expected frequencies is less than 5. Use fisher.test(x), where x is the matrix of counts.
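The same kind of table, but with counts too small for the chi-square approximation (again invented):

  x <- matrix(c(3, 1, 1, 7), nrow = 2)  # 2 x 2 table with small expected frequencies
  fisher.test(x)                        # exact p-value, no large-sample approximation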

11 Correlation and Covariance
Are two variables significantly correlated? Create and attach the data.frame, then apply cor(data.frame) to get the correlation matrix. To test the significance of a single correlation, apply cor.test(x, y) to the two variables (cor.test takes a pair of vectors, not a data frame). You have three options: Kendall's tau (method = "k"), Spearman's rank correlation (method = "s"), or (the default) Pearson's product-moment correlation (method = "p").
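A minimal sketch with simulated variables that are correlated by construction:

  set.seed(7)
  x <- rnorm(40)
  y <- 0.6 * x + rnorm(40)            # y depends partly on x
  cor(x, y)                           # sample correlation coefficient
  cor.test(x, y, method = "p")        # Pearson (default); use "s" or "k" for the rank methods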

12 Kolmogorov-Smirnov Test
Are two sample distributions significantly different? Or does a sample arise from a specified theoretical distribution? Use ks.test(A, B).
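Both forms in a short sketch (simulated data):

  set.seed(3)
  A <- rnorm(50)                      # sample from a normal distribution
  B <- runif(50)                      # sample from a uniform distribution
  ks.test(A, B)                       # two-sample test: do A and B share a distribution?
  ks.test(A, "pnorm")                 # one-sample test of A against the standard normal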

13 Statistical Problems
- Outliers
- Unequal variances
- Correlated errors

14 Outliers and Influential Observations
Extreme responses are called outliers, and extreme inputs are called leverage points. An observation that has great influence on the estimates is usually both an outlier and a leverage point. Use the residual plot to detect them (discussed in the modelling presentation). Fix by verifying the correctness of the observation; if it turns out to be correct, it may reflect a factor not present in any of the other observations.

15 Unequal Variances
Mentioned earlier. Use:
- non-parametric statistics (usually not effective for regression)
- robust methods
- rescaling
- or live with it

16 Correlated Errors
The measurements in the data were not independent, usually because selection of the sample units was not strictly random. This is a frequent problem with time-series data, but it can also reflect spatial correlation or simply sloppy data collection. Try special models, such as an autoregressive model, and avoid the problem with careful experimental design.

