Nonparametrics.Zip (a compressed version of nonparametrics)
Tom Hettmansperger, Department of Statistics, Penn State University

References:
1. Higgins (2004), Introduction to Modern Nonparametric Statistics.
2. Hollander and Wolfe (1999), Nonparametric Statistical Methods.
3. Arnold Notes.
4. Johnson, Morrell, and Schick (1992), Two-Sample Nonparametric Estimation and Confidence Intervals Under Truncation, Biometrics, 48, 1043-1056.
5. Beers, Flynn, and Gebhardt (1990), Measures of Location and Scale for Velocities in Clusters of Galaxies: A Robust Approach, Astronomical Journal, 100, 32-46.
6. Website: http://www.stat.wmich.edu/slab/RGLM/
Robustness and a Little Philosophy

Robustness: insensitivity to assumptions.
a. Structural robustness of statistical procedures
b. Statistical robustness of statistical procedures
Structural Robustness (developed in the 1970s)

Influence: How does a statistical procedure respond to a single outlying observation as it moves farther from the center of the data? Want: methods with bounded influence.

Breakdown Point: What proportion of the data must be contaminated in order to move the procedure beyond any bound? Want: methods with a positive breakdown point.

Beers et al. is an excellent reference.
Statistical Robustness (developed in the 1960s)

Hypothesis tests:
- Level robustness: the significance level is not sensitive to model assumptions.
- Power robustness: the statistical power of a test to detect important alternative hypotheses is not sensitive to model assumptions.

Estimators:
- Variance robustness: the variance (precision) of an estimator is not sensitive to model assumptions.

"Not sensitive to model assumptions" means that the property remains good throughout a neighborhood of the assumed model.
Examples

1. The sample mean is not structurally robust and is not variance robust.
2. The sample median is structurally robust and is variance robust.
3. The t-test is level robust (asymptotically) but is not structurally robust nor power robust.
4. The sign test is structurally and statistically robust. Caveat: it is not very powerful at the normal model.
5. Trimmed means are structurally and variance robust.
6. The sample variance is not robust, neither structurally nor in variance.
7. The interquartile range is structurally robust.
Recall that the sample mean minimizes the sum of squares Σ (x_i − θ)². Replace the quadratic by a function ρ(x) that does not increase like a quadratic, and then minimize Σ ρ(x_i − θ). The result is an M-estimator, which is structurally robust and variance robust. See Beers et al.
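As one concrete illustration, here is a minimal sketch (mine, not the speaker's) of an M-estimate of location using Huber's rho, whose derivative is bounded. The function name, the tuning constant c, and the reweighting iteration are all illustrative choices.

```python
import statistics

def huber_location(xs, c=1.345, tol=1e-8, max_iter=100):
    """M-estimate of location with Huber's psi, computed by
    iteratively reweighted averaging.  c is the bend point (here in
    raw data units for simplicity; in practice it is scaled by a
    robust spread estimate)."""
    mu = statistics.median(xs)  # robust starting value
    for _ in range(max_iter):
        # Huber weights: 1 inside [-c, c], c/|residual| outside,
        # so distant outliers get small weight instead of quadratic pull
        w = [1.0 if abs(x - mu) <= c else c / abs(x - mu) for x in xs]
        new_mu = sum(wi * xi for wi, xi in zip(w, xs)) / sum(w)
        if abs(new_mu - mu) < tol:
            break
        mu = new_mu
    return mu
```

With all weights equal to 1 this reduces to the sample mean; the bounded weights are what give the estimator its bounded influence.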
The nonparametric tests described here are often called distribution free because their significance levels do not depend on the underlying model assumptions. Hence they are level robust. They are also power robust and structurally robust, and the estimators associated with the tests are structurally and variance robust.

Opinion: The importance of nonparametric methods resides in their robustness, not in the distribution-free property (level robustness) of nonparametric tests.
Single Sample Methods
Robust Data Summaries

- Graphical displays: histograms, dotplots, kernel density estimates, CI-boxplots (notched boxplots).
- Inference: confidence intervals and hypothesis tests for location, spread, and shape.
Absolute Magnitudes of Planetary Nebulae, Milky Way. Abs Mag (n = 81):
-5.140 -6.700 -6.970 -7.190 -7.273 -7.365 -7.509 -7.633 -7.741 -8.000
-8.079 -8.359 -8.478 -8.558 -8.662 -8.730 -8.759 -8.825 …
But don’t be too quick to “accept” normality:
Null hypothesis: the population distribution F(x) is normal.

The Kolmogorov-Smirnov statistic: D = sup_x |F_n(x) − F(x)|, the largest gap between the empirical CDF F_n and the hypothesized normal CDF F.

The Anderson-Darling statistic: A² = n ∫ [F_n(x) − F(x)]² / {F(x)[1 − F(x)]} dF(x), which weights the gaps more heavily in the tails.
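A minimal sketch of the KS distance, computed against a normal CDF fitted by the sample mean and stdev (my own illustration, not the slides' software; note that with estimated parameters the standard KS critical values are too conservative and Lilliefors-type tables are needed).

```python
import statistics

def ks_statistic_normal(xs):
    """One-sample Kolmogorov-Smirnov distance between the empirical
    CDF and a normal CDF fitted by the sample mean and stdev."""
    n = len(xs)
    F = statistics.NormalDist(statistics.mean(xs), statistics.stdev(xs)).cdf
    xs = sorted(xs)
    # The sup is attained at a jump of F_n, so check both sides
    # of each jump: F_n is i/n just below x_(i+1) and (i+1)/n at it.
    return max(max(abs((i + 1) / n - F(x)), abs(i / n - F(x)))
               for i, x in enumerate(xs))
```

For data that really are close to normal the statistic is small; heavy skew or outliers push it up.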
Anatomy of a 95% CI-Boxplot

- Box formed by the quartiles and the median.
- IQR (interquartile range) = Q3 − Q1.
- Whiskers extend from the ends of the box to the farthest point within 1.5×IQR.
- For a normal benchmark distribution, IQR = 1.348×Stdev and 1.5×IQR = 2×Stdev.
- Outliers beyond the whiskers are more than 2.7 stdevs from the median. For a normal distribution this should happen about 0.7% of the time.
- Pseudo Stdev = 0.75×IQR.
Estimation of scale or variation. Recall that the sample standard deviation is not robust. From the boxplot we can find the interquartile range. Define a pseudo standard deviation by 0.75×IQR. This is a consistent estimate of the population standard deviation in a normal population, and it is robust.

The MAD is even more robust. Define the MAD by med|x − med(x)|, and define another pseudo standard deviation by 1.5×MAD. Again, this is calibrated to a normal population.
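Both pseudo standard deviations can be sketched as follows (function names are mine; the slide's constants 0.75 and 1.5 are rounded calibrations, the more precise factors being 1/1.349 and 1/0.6745 ≈ 1.4826).

```python
import statistics

def pseudo_sd_iqr(xs):
    """0.75 * IQR: calibrated to estimate sigma at the normal,
    where IQR is about 1.348 * sigma."""
    q1, _, q3 = statistics.quantiles(xs, n=4)
    return 0.75 * (q3 - q1)

def pseudo_sd_mad(xs):
    """1.5 * MAD, where MAD = median absolute deviation from the
    median (about 0.674 * sigma at the normal)."""
    med = statistics.median(xs)
    return 1.5 * statistics.median([abs(x - med) for x in xs])
```

Either one tracks sigma for normal data, but unlike the sample stdev neither is dragged off by a single wild observation.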
           No Outlier   Outlier   Breakdown
Stdev      1.80         2.43      1 obs
0.75IQR    1.82                   25%
1.5MAD     1.78                   50%

Suppose we enter 5.14 rather than -5.14 in the data set. The table shows the effect on the non-robust stdev and on the robust IQR and MAD.
The confidence interval and hypothesis test
Additional remarks: The median is a robust measure of location; it is not affected by outliers, and it is efficient when the population has heavier tails than a normal population. The sign test is likewise robust and insensitive to outliers, and it is efficient when the tails are heavier than normal; similarly for the confidence interval.

In addition, the test and the confidence interval are distribution free: they do not depend on the shape of the underlying population to determine critical values or confidence coefficients. They are only 64% efficient relative to the mean and the t-test when the population is normal. If the population is symmetric, the Wilcoxon signed rank statistic can be used instead; it is robust against outliers and 95% efficient relative to the t-test.
Two-Sample Methods
Two-Sample Comparisons

- 85% CI-boxplots
- Mann-Whitney-Wilcoxon rank sum statistic
- Estimate of the difference in locations
- Test of the difference in locations
- Confidence interval for the difference in locations
- Levene's rank statistic for differences in scale or variance
Why 85% confidence intervals? They give a convenient visual test of the null hypothesis of equal locations. Rule: reject the null hypothesis if the two 85% confidence intervals do not overlap. The significance level of this rule is close to 5% provided the ratio of sample sizes is less than 3.
Mann-Whitney-Wilcoxon statistic: the sign statistic applied to the pairwise differences between the two samples. Unlike the sign test (64% efficiency at the normal), the MWW test has 95.5% efficiency for a normal population, and it is robust against outliers in either sample.
Mann-Whitney Test and CI: App Mag, Abs Mag

                  N    Median
App Mag (M-31)  360    14.540
Abs Mag (MW)     81   -10.557

Point estimate for d is 24.900
95.0 percent CI for d is (24.530, 25.256)
W = 94140.0
Test of d = 0 vs d not equal 0 is significant at 0.0000

What is W?
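In this output W is (in the usual software convention) the Wilcoxon rank sum: the sum of the combined-sample ranks of the first sample. A hand-rolled sketch, using midranks for ties (my own illustration, not the package's code):

```python
def rank_sum_W(xs, ys):
    """Wilcoxon rank-sum statistic: the sum of the combined-sample
    ranks of the first sample, with midranks for ties."""
    combined = sorted(xs + ys)
    def midrank(v):
        lo = combined.index(v) + 1         # first 1-based position of v
        hi = lo + combined.count(v) - 1    # last 1-based position of v
        return (lo + hi) / 2               # average rank over the tie group
    return sum(midrank(x) for x in xs)
```

The Mann-Whitney count form U (number of pairs with x below y, suitably oriented) is just W minus n(n+1)/2, so the two statistics carry the same information.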
What about spread or scale differences between the two populations? Below we shift the MW observations to the right by 24.9 to line up with M-31.

Variable   StDev   IQR     PseudoStdev
MW         1.804   2.420   1.815
M-31       1.195   1.489   1.117
Levene's Rank Test

Compute |Y − med(Y)| and |X − med(X)|, called the absolute deviations. Apply MWW to the absolute deviations (i.e., rank the absolute deviations). The test rejects equal spreads in the two populations when the difference in average ranks of the absolute deviations is too large.

Idea: After we have centered the data, if the null hypothesis of no difference in spreads is true, then all permutations of the combined data are roughly equally likely (the permutation principle). So randomly select a large number of permutations, say B of them. For each permutation, assign the first n values to the Y sample and the remaining m to the X sample, and compute MWW on the absolute deviations. The approximate p-value is the number of permuted MWW values at least as extreme as the original MWW, divided by B.
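The permutation recipe just described can be sketched directly. This is a minimal illustration; B, the seed, and the two-sided counting rule are my choices.

```python
import random
import statistics

def levene_rank_perm(ys, xs, B=2000, seed=0):
    """Permutation p-value for Levene's rank test of equal spread:
    center each sample at its median, pool the absolute deviations,
    and compare the observed rank sum to its permutation distribution."""
    def absdevs(v):
        m = statistics.median(v)
        return [abs(u - m) for u in v]

    def rank_sum(first_n, pooled):
        s = sorted(pooled)
        def midrank(v):               # midranks handle ties
            lo = s.index(v) + 1
            hi = lo + s.count(v) - 1
            return (lo + hi) / 2
        return sum(midrank(v) for v in pooled[:first_n])

    pooled = absdevs(ys) + absdevs(xs)
    n = len(ys)
    observed = rank_sum(n, pooled)
    mu = n * (len(pooled) + 1) / 2    # null mean of the rank sum
    rng = random.Random(seed)
    count = 0
    for _ in range(B):
        perm = pooled[:]
        rng.shuffle(perm)             # random relabeling of the pooled devs
        if abs(rank_sum(n, perm) - mu) >= abs(observed - mu):
            count += 1
    return count / B
```

When one sample is much more spread out than the other, its absolute deviations dominate the high ranks and the p-value is near zero.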
Difference of the rank means of the absolute deviations: 51.98. So we easily reject the null hypothesis of no difference in spreads and conclude that the two populations have significantly different spreads.
k-Sample Methods

- One-sample methods
- Two-sample methods
Variable      Mean     StDev   Median   0.75IQR   Skew    Kurtosis
Messier 31    22.685   0.969   23.028    1.069   -0.67    -0.67
Messier 81    24.298   0.274   24.371    0.336   -0.49    -0.68
NGC 3379      26.139   0.267   26.230    0.317   -0.64    -0.48
NGC 4494      26.654   0.225   26.659    0.252   -0.36    -0.55
NGC 4382      26.905   0.201   26.974    0.208   -1.06     1.08

All one-sample and two-sample methods can be applied one at a time or two at a time: plots, summaries, inferences. We begin the k-sample methods by asking whether the location differences between the NGC nebulae are statistically significant. We will briefly discuss issues of truncation.
Kruskal-Wallis Test on NGC

sub        N   Median   Ave Rank       Z
1         45    26.23       29.6   -9.39
2        101    26.66      104.5    0.36
3         59    26.97      156.4    8.19
Overall  205              103.0

KW = 116.70   DF = 2   P = 0.000

This test can be followed by multiple comparisons. For example, if we assign a family error rate of 0.09, then we would conduct 3 MWW tests, each at level 0.03 (Bonferroni).
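The KW statistic compares each group's average rank with the overall average (N + 1)/2. A minimal sketch (mine, without the tie-correction factor); compare H to a chi-square with k − 1 degrees of freedom.

```python
def kruskal_wallis(samples):
    """Kruskal-Wallis H for a list of samples (midranks for ties,
    no tie-correction factor)."""
    pooled = sorted(v for s in samples for v in s)
    def midrank(v):
        lo = pooled.index(v) + 1
        hi = lo + pooled.count(v) - 1
        return (lo + hi) / 2
    N = len(pooled)
    # H sums each group's squared deviation of its average rank
    # from the overall average rank (N + 1) / 2, weighted by group size
    H = sum(len(s) * (sum(midrank(v) for v in s) / len(s) - (N + 1) / 2) ** 2
            for s in samples)
    return 12 / (N * (N + 1)) * H
```

With identical groups the average ranks all equal (N + 1)/2 and H is zero; well-separated groups drive H up toward the chi-square tail.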
What to do about truncation:
1. See a statistician.
2. Read the Johnson, Morrell, and Schick reference, and then see a statistician.

Here is the problem. Suppose we want to estimate the difference in locations between two populations F(x) and G(y) = F(y − d). But with right truncation at a, the observations come from the truncated distributions F(x)/F(a) and G(y)/G(a), for values at most a.

Suppose d > 0, so we want to shift the X-sample to the right, toward the truncation point. As we shift the Xs, some will pass the truncation point and will be eliminated from the data set. This changes the sample sizes and requires an adjustment when computing the corresponding MWW to see if it is equal to its expectation. See the reference for details.
Computation of the shift estimate with truncation

   d     m    n             W      E(W)
 25.3   88   59   5.10   4750.5   4366.0
 28.3   84   59   3.60   4533.5   4248.0
 30.3   83   59   2.10   4372.0   4218.5
 32.3   81   59   0.80   4224.5   4159.5
 33.3   81   59  -0.20   4144.5   4159.5
 33.1   81   59  -0.00   4161.5   4159.5

Comparison of NGC 4382 and NGC 4494. Data multiplied by 100 and 2600 subtracted. Truncation point taken as 120.
Point estimate for d is 25.30; W = 6595.5, m = 101 and n = 59.
Robust regression fitting and correlation (association)

Dataset: http://astrostatistics.psu.edu/datasets/HIP_star.html
We have extracted a sample of 50 from the subset of 2719 Hipparcos stars.

- Vmag = visual band magnitude, an inverted logarithmic measure of brightness.
- Plx = parallactic angle (mas = milliarcseconds); 1000/Plx gives the distance in parsecs (pc).
- B-V = color of the star (mag).

The HR diagram plots logL vs. B-V, where (roughly) the log-luminosity in units of solar luminosity is logL = (15 − Vmag − 5 log(Plx))/2.5. All logs are base 10.
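The two conversions above translate directly into code (function names are mine):

```python
import math

def log_luminosity(vmag, plx_mas):
    """log10 luminosity in solar units from visual magnitude and
    parallax in milliarcseconds: (15 - Vmag - 5*log10(Plx)) / 2.5."""
    return (15.0 - vmag - 5.0 * math.log10(plx_mas)) / 2.5

def distance_pc(plx_mas):
    """Distance in parsecs from parallax in milliarcseconds."""
    return 1000.0 / plx_mas
```

For instance, a star with Vmag = 5 at parallax 100 mas sits at 10 pc and has logL = 0, i.e., roughly solar luminosity.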
Row   LogL       BV       Row   LogL       BV
  1   0.69233   0.593      26  -0.38556   0.879
  2   1.75525   0.935      27  -0.22978   0.723
  3  -0.30744   0.830      28   0.57671   0.455
  4  -0.17328   0.685      29  -1.00092   1.110
  5   0.57038   0.529      30  -0.00215   0.637
  6  -1.04471   1.297      31  -0.95768   1.616
  7   0.51396   0.510      32   0.10378   0.606
  8   0.52149   0.607      33  -1.43872   1.365
  9  -1.06306   1.288      34   1.23674   0.395
 10   0.41990   0.677      35   0.10866   0.630
 11  -0.76152   0.950      36  -1.60621   *
 12  -1.10608   1.260      37   0.06468   0.599
 13   0.42593   0.651      38  -0.18214   0.709
 14  -0.44066   0.909      39   0.37988   0.561
 15  -0.90039   1.569      40   1.23793   0.257
 16  -0.74118   1.065      41  -0.16896   0.864
 17  -0.66820   1.049      42  -0.59331   0.955
 18  -0.26810   0.884      43   1.78028   1.010
 19   0.56722   0.480      44  -0.63099   1.100
 20  -0.93809   0.490      45   0.61900   0.664
 21  -0.38095   1.160      46  -0.28520   0.706
 22  -0.19267   0.810      47  -0.71404   0.898
 23   0.54619   0.498      48   0.35061   0.616
 24   0.20161   0.614      49   0.55002   0.466
 25   0.37348   0.538      50   0.37922   0.548
The resistant line is robust and not affected by the outliers; it follows the bulk of the data much better than the non-robust least squares regression line. There are various ways to find a robust or resistant line. The most typical is to use the ideas of M-estimation and minimize Σ ρ(y_i − a − b x_i), where ρ(x) does not increase as fast as a quadratic.

The strength of the relationship between variables is generally measured by correlation. Next we look briefly at non-robust and robust measures of correlation or association.
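One concrete resistant fit is the Theil-Sen line: slope = median of all pairwise slopes, intercept = median residual offset. This is a different route than the M-estimation just described, offered only as a simple illustration of a fit that ignores outliers.

```python
import itertools
import statistics

def theil_sen_line(xs, ys):
    """Resistant line: the slope is the median of all pairwise
    slopes, the intercept the median of y - slope * x."""
    slopes = [(ys[j] - ys[i]) / (xs[j] - xs[i])
              for i, j in itertools.combinations(range(len(xs)), 2)
              if xs[j] != xs[i]]
    b = statistics.median(slopes)
    a = statistics.median(y - b * x for x, y in zip(xs, ys))
    return a, b
```

A single wild y-value corrupts only the pairwise slopes involving that point, so the median slope stays on the bulk of the data while least squares is pulled toward the outlier.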
Pearson product moment correlation is not robust. Spearman’s rank correlation coefficient is simply the Pearson coefficient with the data replaced by their ranks. Spearman’s coefficient measures association or the tendency of the two measurements to increase or decrease together. Pearson’s measures the degree to which the measurements cluster along a straight line.
For the example: Pearson r = -0.673, Spearman r_s = -0.743.

Significance tests:
- Pearson r: refer z to a standard normal distribution, where z = sqrt(n − 3) · (1/2) ln[(1 + r)/(1 − r)] (the Fisher transformation).
- Spearman r_s: refer z to a standard normal distribution, where z = r_s · sqrt(n − 1).
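A sketch of both coefficients and their z statistics (my code; the Pearson z uses the standard Fisher transformation, and Spearman is literally Pearson applied to midranks):

```python
import math
import statistics

def pearson_r(xs, ys):
    """Pearson product-moment correlation."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = math.sqrt(sum((x - mx) ** 2 for x in xs) *
                    sum((y - my) ** 2 for y in ys))
    return num / den

def ranks(v):
    """Midranks of a sequence (ties get their average rank)."""
    s = sorted(v)
    return [(s.index(u) + 1 + s.index(u) + s.count(u)) / 2 for u in v]

def spearman_rs(xs, ys):
    """Spearman's coefficient: Pearson applied to the ranks."""
    return pearson_r(ranks(xs), ranks(ys))

def pearson_z(r, n):
    """Fisher transform: approx N(0,1) under no correlation."""
    return math.sqrt(n - 3) * 0.5 * math.log((1 + r) / (1 - r))

def spearman_z(rs, n):
    """Normal approximation for Spearman's coefficient."""
    return rs * math.sqrt(n - 1)
```

Spearman's rank version rewards any monotone trend, so it reaches 1 on curved but increasing data where Pearson falls short of 1.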
Kendall's tau coefficient is defined as tau = 2P/[n(n − 1)/2] − 1, where P is the number of concordant pairs out of the n(n − 1)/2 total pairs. For example, the points (1, 3) and (2, 7) are concordant since 2 > 1 and 7 > 3. Note that Kendall's tau estimates the probability of concordance minus the probability of discordance in the bivariate population.
For the example: Kendall's tau = -0.63095.

Significance test: refer z to a standard normal distribution, where z = tau / sqrt[2(2n + 5) / (9n(n − 1))].
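Kendall's tau and its normal approximation can be sketched as follows (my code; this is the tau-a version, which assumes no ties):

```python
import itertools
import math

def kendall_tau(xs, ys):
    """Kendall's tau-a: (concordant - discordant) / total pairs,
    assuming no ties."""
    n = len(xs)
    # +1 for each concordant pair, -1 for each discordant pair
    s = sum(1 if (xs[j] - xs[i]) * (ys[j] - ys[i]) > 0 else -1
            for i, j in itertools.combinations(range(n), 2))
    return s / (n * (n - 1) / 2)

def kendall_z(tau, n):
    """Normal approximation for the test of independence:
    tau / sqrt(2(2n + 5) / (9 n (n - 1)))."""
    return tau / math.sqrt(2 * (2 * n + 5) / (9 * n * (n - 1)))
```

Perfect agreement of orderings gives tau = 1, perfect reversal gives -1, and independence gives values near 0.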
What more can we do?

1. Multiple regression
2. Analysis of designed experiments (AOV)
3. Analysis of covariance
4. Multivariate analysis

These analyses can be carried out using the website: http://www.stat.wmich.edu/slab/RGLM/
Professor Lundquist, in a seminar on compulsive thinkers, illustrates his brain stapling technique.
The End