Nonparametric Methods and Chi-Square Tests Session 5.


1 Nonparametric Methods and Chi-Square Tests Session 5

2 Using Statistics. The Sign Test. The Runs Test - A Test for Randomness. The Mann-Whitney U Test. The Wilcoxon Signed-Rank Test. Nonparametric Methods and Chi-Square Tests (1)

3 The Kruskal-Wallis Test - A Nonparametric Alternative to One-Way ANOVA. The Friedman Test for a Randomized Block Design. The Spearman Rank Correlation Coefficient. A Chi-Square Test for Goodness of Fit. Contingency Table Analysis - A Chi-Square Test for Independence. A Chi-Square Test for Equality of Proportions. Using the Computer. Summary and Review of Terms. Nonparametric Methods and Chi-Square Tests (2)

4 Parametric Methods
Inferences based on assumptions about the nature of the population distribution (usually: the population is normal).
Types of tests:
- t-test: comparing two population means or proportions; testing the value of a population mean or proportion.
- ANOVA: testing equality of several population means.
5-1 Using Statistics (Parametric Tests)

5 Nonparametric Tests
Distribution-free methods making no assumptions about the population distribution.
Types of tests:
- Sign tests:
  - Sign Test: comparing paired observations.
  - McNemar Test: comparing qualitative variables.
  - Cox and Stuart Test: detecting trend.
- Runs tests:
  - Runs Test: detecting randomness.
  - Wald-Wolfowitz Test: comparing two distributions.
Nonparametric Tests (1)

6 Nonparametric Tests
- Ranks tests:
  - Mann-Whitney U Test: comparing two populations.
  - Wilcoxon Signed-Rank Test: paired comparisons.
  - Comparing several populations (ANOVA with ranks): Kruskal-Wallis Test; Friedman Test (repeated measures).
  - Spearman Rank Correlation Coefficient.
- Chi-Square Tests:
  - Goodness of fit.
  - Testing for independence: contingency table analysis.
  - Equality of proportions.
Nonparametric Tests (2)

7 Deal with enumerative (frequency counts) data. Do not deal with specific population parameters, such as the mean or standard deviation. Do not require assumptions about specific population distributions (in particular, the normality assumption). Nonparametric Tests (3)

8 Comparing paired observations
Paired observations: X and Y; p = P(X > Y)
- Two-tailed test: H0: p = 0.50 vs. H1: p ≠ 0.50
- Right-tailed test: H0: p ≤ 0.50 vs. H1: p > 0.50
- Left-tailed test: H0: p ≥ 0.50 vs. H1: p < 0.50
Test statistic: T = number of + signs (for large samples, a normal approximation is used).
5-2 Sign Test

9 Small Sample: Binomial Test
- For a two-tailed test, find a critical point C1 corresponding as closely as possible to α/2, and define C2 = n − C1. Reject the null hypothesis if T ≤ C1 or T ≥ C2.
- For a right-tailed test, reject H0 if T ≥ C, where C is the value from the binomial distribution with parameters n and p = 0.50 such that P(T ≥ C) is as close as possible to the chosen level of significance, α.
- For a left-tailed test, reject H0 if T ≤ C, where C is chosen so that P(T ≤ C) is as close as possible to α.
Sign Test Decision Rule

10 Cumulative Binomial Probabilities (n = 15, p = 0.5)

x    F(x)        x    F(x)
0    0.00003     8    0.69638
1    0.00049     9    0.84912
2    0.00369    10    0.94077
3    0.01758    11    0.98242
4    0.05923    12    0.99631
5    0.15088    13    0.99951
6    0.30362    14    0.99997
7    0.50000    15    1.00000

CEO  Before  After  Sign
1    3       4      +
2    5       5      0
3    2       3      +
4    2       4      +
5    4       4      0
6    2       3      +
7    1       2      +
8    5       4      -
9    4       5      +
10   5       4      -
11   3       4      +
12   2       5      +
13   2       5      +
14   2       3      +
15   1       2      +
16   3       2      -
17   4       5      +

n = 15 (the two ties are dropped), T = 12, α/2 = 0.025, C1 = 3, C2 = 15 − 3 = 12.
H0 rejected, since T ≥ C2.
Example 5-1
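The small-sample decision rule above can be sketched in Python using exact binomial probabilities. This is a sketch, not from the original slides: the function names are chosen here, and the before/after ratings are the ones read from the table of Example 5-1.

```python
from math import comb

def binom_cdf(k, n, p=0.5):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def sign_test(before, after):
    """Two-tailed small-sample sign test.

    Ties are dropped; T = number of '+' signs, i.e. pairs where the
    'after' rating exceeds the 'before' rating."""
    diffs = [a - b for b, a in zip(before, after) if a != b]
    n = len(diffs)                      # effective sample size (ties removed)
    t = sum(1 for d in diffs if d > 0)  # number of + signs
    # two-tailed p-value: double the smaller tail probability
    p_tail = min(binom_cdf(t, n), 1 - binom_cdf(t - 1, n))
    return n, t, min(1.0, 2 * p_tail)

# CEO approval ratings before/after (Example 5-1, as read from the slide)
before = [3, 5, 2, 2, 4, 2, 1, 5, 4, 5, 3, 2, 2, 2, 1, 3, 4]
after  = [4, 5, 3, 4, 4, 3, 2, 4, 5, 4, 4, 5, 5, 3, 2, 2, 5]
n_eff, t, p_value = sign_test(before, after)
print(n_eff, t, p_value)   # n = 15, T = 12
```

With T = 12 the exact two-tailed p-value is below 0.05, matching the slide's rejection of H0.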

11 A run is a sequence of like elements that are preceded and followed by different elements, or by no element at all.
- Case 1: S|E|S|E|S|E|S|E|S|E|S|E|S|E|S|E|S|E|S|E (R = 20): apparently nonrandom
- Case 2: SSSSSSSSSS|EEEEEEEEEE (R = 2): apparently nonrandom
- Case 3: S|EE|SS|EEE|S|E|SS|E|S|EE|SSS|E (R = 12): perhaps random
A two-tailed hypothesis test for randomness:
H0: Observations are generated randomly
H1: Observations are not generated randomly
Test statistic: R = number of runs
Reject H0 at level α if R ≤ C1 or R ≥ C2, as given in Table 8, with total tail probability P(R ≤ C1) + P(R ≥ C2) = α.
5-3 The Runs Test - A Test for Randomness

12 Table 8: Number of Runs (r), row (n1, n2) = (10, 10):

r      11     12     13     14     15     16     17     18     19     20
F(r)   0.586  0.758  0.872  0.949  0.981  0.996  0.999  1.000  1.000  1.000

Case 1: n1 = 10, n2 = 10, R = 20: p-value ≈ 0
Case 2: n1 = 10, n2 = 10, R = 2: p-value ≈ 0
Case 3: n1 = 10, n2 = 10, R = 12: p-value = 2[1 − F(11)] = (2)(1 − 0.586) = (2)(0.414) = 0.828; H0 not rejected
Runs Test: Examples
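Counting runs is mechanical: a new run starts whenever an element differs from its predecessor. A minimal Python sketch (the helper name is chosen here), applied to Case 3 from the slide:

```python
from itertools import groupby

def count_runs(seq):
    """Number of runs: maximal blocks of identical adjacent elements."""
    return sum(1 for _ in groupby(seq))

# Case 3 from the slide: S|EE|SS|EEE|S|E|SS|E|S|EE|SSS|E
case3 = "SEESSEEESESSESEESSSE"
print(count_runs(case3))   # 20 observations, R = 12
```

The same function gives R = 2 for Case 2's SSSSSSSSSS|EEEEEEEEEE.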

13 Large-Sample Runs Test: Using the Normal Approximation
For large n1 and n2, R is approximately normal with mean E(R) = 2n1n2/(n1 + n2) + 1 and variance Var(R) = 2n1n2(2n1n2 − n1 − n2) / [(n1 + n2)²(n1 + n2 − 1)], and the test statistic is z = [R − E(R)] / sqrt(Var(R)).

14 Example 5-2: n1 = 27, n2 = 26, R = 16. H0 should be rejected at any common level of significance. Large-Sample Runs Test: Example 5-2
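The normal approximation for Example 5-2 can be sketched as follows, using the standard mean and variance of R (the function name is illustrative, not from the text):

```python
from math import sqrt

def runs_z(n1, n2, r):
    """z statistic for the large-sample runs test."""
    n = n1 + n2
    mu = 2 * n1 * n2 / n + 1                                     # E(R)
    var = 2 * n1 * n2 * (2 * n1 * n2 - n) / (n ** 2 * (n - 1))   # Var(R)
    return (r - mu) / sqrt(var)

# Example 5-2: n1 = 27, n2 = 26, observed R = 16
z = runs_z(27, 26, 16)
print(round(z, 2))   # about -3.19, far in the left tail
```

Since |z| exceeds even the 0.01-level critical value 2.576, H0 is rejected at any common level of significance, as the slide states.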

15 The null and alternative hypotheses for the Wald-Wolfowitz test:
H0: The two populations have the same distribution.
H1: The two populations have different distributions.
The test statistic: R = number of runs in the sequence of samples, when the data from both samples have been sorted together.
Salesperson A: 35, 44, 39, 50, 48, 29, 60, 75, 49, 66
Salesperson B: 17, 23, 13, 24, 33, 21, 18, 16, 32
Using the Runs Test to Compare Two Population Distributions (Means): the Wald-Wolfowitz Test

16 Table: Number of Runs (r), row (n1, n2) = (9, 10):

r      2      3      4      5
F(r)   0.000  0.000  0.002  0.004

Sorted combined sample, with runs marked:
13 B, 16 B, 17 B, 21 B, 24 B (run 1) | 29 A (run 2) | 32 B, 33 B (run 3) | 35 A, 39 A, 44 A, 48 A, 49 A, 50 A, 60 A, 66 A, 75 A (run 4)

n1 = 10, n2 = 9, R = 4, p-value = P(R ≤ 4) = 0.002: H0 may be rejected.
The Wald-Wolfowitz Test: Example 5-3

17 Ranks tests:
- Mann-Whitney U Test: comparing two populations.
- Wilcoxon Signed-Rank Test: paired comparisons.
- Comparing several populations (ANOVA with ranks): Kruskal-Wallis Test; Friedman Test (repeated measures).
Ranks Tests

18 The null and alternative hypotheses:
H0: The distributions of the two populations are identical
H1: The two population distributions are not identical
The Mann-Whitney U statistic: U = n1·n2 + n1(n1 + 1)/2 − R1, where n1 is the sample size from population 1, n2 is the sample size from population 2, and R1 is the sum of the ranks of sample 1 in the combined sample.
5-4 The Mann-Whitney U Test (Comparing Two Populations)

19 Cumulative Distribution Function of the Mann-Whitney U Statistic (n1 = 6, n2 = 6):

u    P(U ≤ u)
4    0.0130
5    0.0206
6    0.0325

Model  Time  Rank        Model  Time  Rank
A      35    5           B      29    2
A      38    8           B      27    1
A      40    10          B      30    3
A      42    12          B      33    4
A      41    11          B      39    9
A      36    6           B      37    7
Rank sums: A = 52, B = 26

U = (6)(6) + (6)(7)/2 − 52 = 5; P(U ≤ 5) = 0.0206
The Mann-Whitney U Test: Example 5-4
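The U statistic for Example 5-4 can be recomputed directly from the raw times. A sketch (the function name is chosen here; midranks are used so the same helper works when ties occur):

```python
def mann_whitney_u(x1, x2):
    """U = n1*n2 + n1(n1+1)/2 - R1, where R1 is the rank sum of sample 1
    in the combined ordering (midranks for ties)."""
    combined = sorted(x1 + x2)
    def midrank(v):
        # average position (1-based) of all occurrences of v
        idxs = [i + 1 for i, c in enumerate(combined) if c == v]
        return sum(idxs) / len(idxs)
    n1, n2 = len(x1), len(x2)
    r1 = sum(midrank(v) for v in x1)
    return n1 * n2 + n1 * (n1 + 1) / 2 - r1

# Assembly times for the two models (Example 5-4, as read from the slide)
model_a = [35, 38, 40, 42, 41, 36]
model_b = [29, 27, 30, 33, 39, 37]
u = mann_whitney_u(model_a, model_b)
print(u)   # R1 = 52, so U = 36 + 21 - 52 = 5
```

Looking U = 5 up in the table above gives P(U ≤ 5) = 0.0206, so H0 is rejected at the 0.05 level.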

20 Example 5-5: Large-Sample Mann-Whitney U Test

Program 1:                          Program 2:
Score  Rank  Cum. rank sum          Score  Rank  Cum. rank sum
85     20.0   20.0                  65     10.0   10.0
87     21.0   41.0                  57      4.0   14.0
92     27.0   68.0                  74     16.0   30.0
98     30.0   98.0                  43      2.0   32.0
90     26.0  124.0                  39      1.0   33.0
88     23.0  147.0                  88     23.0   56.0
75     17.0  164.0                  62      8.5   64.5
72     13.5  177.5                  69     11.0   75.5
60      6.5  184.0                  70     12.0   87.5
93     28.0  212.0                  72     13.5  101.0
88     23.0  235.0                  59      5.0  106.0
89     25.0  260.0                  60      6.5  112.5
96     29.0  289.0                  80     18.0  130.5
73     15.0  304.0                  83     19.0  149.5
62      8.5  312.5                  50      3.0  152.5

Since the test statistic is z = −3.32, the p-value ≈ 0.0005, and H0 is rejected.

21 The null and alternative hypotheses:
H0: The median difference between populations 1 and 2 is zero.
H1: The median difference between populations 1 and 2 is not zero.
Find the difference for each pair, D = x1 − x2, and then rank the absolute values of the differences. The Wilcoxon T statistic is the smaller of the sum of the positive ranks and the sum of the negative ranks.
For small samples, a left-tailed test is used, with the critical values in Appendix C, Table 10.
For large samples, T is approximately normal with E(T) = n(n + 1)/4 and Var(T) = n(n + 1)(2n + 1)/24, and the test statistic is z = [T − E(T)] / sqrt(Var(T)).
5-5 The Wilcoxon Signed-Ranks Test (Paired Ranks)

22
Sold(1)  Sold(2)  D=x1−x2  ABS(D)  Rank ABS(D)  Rank(D>0)  Rank(D<0)
56       40        16      16       9.0          9.0        0
48       70       −22      22      12.0          0         12.0
100      60        40      40      15.0         15.0        0
85       70        15      15       8.0          8.0        0
22       8         14      14       7.0          7.0        0
44       40         4       4       2.0          2.0        0
35       45       −10      10       6.0          0          6.0
28       7         21      21      11.0         11.0        0
52       60        −8       8       5.0          0          5.0
77       70         7       7       3.5          3.5        0
89       90        −1       1       1.0          0          1.0
10       10         0       (tie: dropped)
65       85       −20      20      10.0          0         10.0
90       61        29      29      13.0         13.0        0
70       40        30      30      14.0         14.0        0
33       26         7       7       3.5          3.5        0
Sums:                                            86         34

T = 34, n = 15
Critical points: P = 0.05: 30; P = 0.025: 25; P = 0.01: 20; P = 0.005: 16
H0 is not rejected (note the arithmetic error in the text for store 13)
Example 5-6
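The T statistic for Example 5-6 can be recomputed in Python. This is a sketch: the function name is chosen here, and the data are the before/after sales as reconstructed from the slide's table.

```python
def wilcoxon_t(x1, x2):
    """Wilcoxon signed-rank T: drop zero differences, rank |D| with
    midranks, and take the smaller of the positive and negative rank sums."""
    diffs = [a - b for a, b in zip(x1, x2) if a != b]
    abs_sorted = sorted(abs(d) for d in diffs)
    def midrank(v):
        # average position (1-based) of all occurrences of v among the |D|
        idxs = [i + 1 for i, a in enumerate(abs_sorted) if a == v]
        return sum(idxs) / len(idxs)
    pos = sum(midrank(abs(d)) for d in diffs if d > 0)
    neg = sum(midrank(abs(d)) for d in diffs if d < 0)
    return len(diffs), min(pos, neg)

# Units sold in the 16 stores (Example 5-6, as read from the slide)
sold1 = [56, 48, 100, 85, 22, 44, 35, 28, 52, 77, 89, 10, 65, 90, 70, 33]
sold2 = [40, 70, 60, 70, 8, 40, 45, 7, 60, 70, 90, 10, 85, 61, 40, 26]
n, t = wilcoxon_t(sold1, sold2)
print(n, t)   # n = 15 after dropping the tie, T = 34
```

Since T = 34 exceeds the critical point 30 at P = 0.05, H0 is not rejected, as the slide concludes.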

23 The spreadsheet implements a Wilcoxon signed-rank test. The RANK function was used in column F (Rank Diff); however, the resulting values required adjustment because of the tie in the rankings for Scores 10 and 16 (Excel handles ties differently than the Wilcoxon signed-rank procedure does). Example 5-6 using Excel

24

25
Hourly Messages  Md0  D=x1−x2  ABS(D)  Rank ABS(D)  Rank(D>0)  Rank(D<0)
151  149    2   2    1.0    1.0    0.0
144  149   −5   5    2.0    0.0    2.0
123  149  −26  26   13.0    0.0   13.0
178  149   29  29   15.0   15.0    0.0
105  149  −44  44   23.0    0.0   23.0
112  149  −37  37   20.0    0.0   20.0
140  149   −9   9    4.0    0.0    4.0
167  149   18  18   10.0   10.0    0.0
177  149   28  28   14.0   14.0    0.0
185  149   36  36   19.0   19.0    0.0
129  149  −20  20   11.0    0.0   11.0
160  149   11  11    6.0    6.0    0.0
110  149  −39  39   21.0    0.0   21.0
170  149   21  21   12.0   12.0    0.0
198  149   49  49   25.0   25.0    0.0
165  149   16  16    8.0    8.0    0.0
109  149  −40  40   22.0    0.0   22.0
118  149  −31  31   16.5    0.0   16.5
155  149    6   6    3.0    3.0    0.0
102  149  −47  47   24.0    0.0   24.0
164  149   15  15    7.0    7.0    0.0
180  149   31  31   16.5   16.5    0.0
139  149  −10  10    5.0    0.0    5.0
166  149   17  17    9.0    9.0    0.0
182  149   33  33   18.0   18.0    0.0
Sums:                     163.5  161.5
Example 5-7

26 The Kruskal-Wallis hypothesis test:
H0: All k populations have the same distribution.
H1: Not all k populations have the same distribution.
The Kruskal-Wallis test statistic: H = [12 / (n(n + 1))] Σ (Rj² / nj) − 3(n + 1), where nj and Rj are the sample size and rank sum of group j, and n = Σ nj.
If each nj > 5, then H is approximately distributed as χ² with k − 1 degrees of freedom.
5-6 The Kruskal-Wallis Test - A Nonparametric Alternative to One-Way ANOVA

27
Software  Time  Rank
1         45    14
1         38    10
1         56    16
1         60    17
1         47    15
1         65    18
2         30     8
2         40    11
2         28     7
2         44    13
2         25     5
2         42    12
3         22     4
3         19     3
3         15     1
3         31     9
3         27     6
3         17     2
Group rank sums: R1 = 90, R2 = 56, R3 = 25
χ²(2, 0.005) = 10.5966, so H0 is rejected.
Example 5-8: The Kruskal-Wallis Test
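The H statistic for Example 5-8 can be recomputed from the raw times. A sketch (the function name is chosen here; midranks make it robust to ties, although this data set has none):

```python
def kruskal_wallis_h(groups):
    """H = 12/(n(n+1)) * sum(Rj^2 / nj) - 3(n+1), with midranks for ties."""
    pooled = sorted(v for g in groups for v in g)
    def midrank(v):
        idxs = [i + 1 for i, p in enumerate(pooled) if p == v]
        return sum(idxs) / len(idxs)
    n = len(pooled)
    s = sum(sum(midrank(v) for v in g) ** 2 / len(g) for g in groups)
    return 12 / (n * (n + 1)) * s - 3 * (n + 1)

# Task times under the three software packages (Example 5-8, from the slide)
g1 = [45, 38, 56, 60, 47, 65]
g2 = [30, 40, 28, 44, 25, 42]
g3 = [22, 19, 15, 31, 27, 17]
h = kruskal_wallis_h([g1, g2, g3])
print(round(h, 2))   # rank sums 90, 56, 25 give H = 12.36
```

H = 12.36 exceeds χ²(2, 0.005) = 10.5966, so H0 is rejected even at the 0.005 level.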

28 If the null hypothesis in the Kruskal-Wallis test is rejected, then we may wish, in addition, to compare each pair of populations to determine which differ and which do not. Further Analysis (Pairwise Comparisons of Average Ranks)

29 Pairwise Comparisons: Example 5-8

30 A manager wants to explore upgrading the fleet of trucks. There are three new models to choose from. The manager is allowed to drive the trucks for a few days and randomly picks 15 drivers to do so; five drivers test each truck. Conduct a Kruskal-Wallis rank test for differences among the three population median MPG values. Pairwise Comparisons: Example 5-9 Using Excel

31

32 The Spearman Rank Correlation Coefficient is the simple correlation coefficient calculated from variables converted to ranks from their original values. 5-7 The Spearman Rank Correlation Coefficient

33 Table 11 (α = 0.005):

n    critical value
7    ------
8    0.881
9    0.833
10   0.794
11   0.818

r_s = 1 − 6 Σ d_i² / [n(n² − 1)] = 1 − (6)(4) / [(10)(10² − 1)] = 1 − 24/990 = 0.9758 > 0.794, so H0 is rejected.

MMI   S&P100  R-MMI  R-S&P  Diff  Diffsq
220   151     7      6       1     1
218   150     5      5       0     0
216   148     3      3       0     0
217   149     4      4       0     0
215   147     2      2       0     0
213   146     1      1       0     0
219   152     6      7      −1     1
236   165     9     10      −1     1
237   162    10      9       1     1
235   161     8      8       0     0
Sum: 4
Spearman Rank Correlation Coefficient: Example 5-10
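The rank correlation for Example 5-10 can be recomputed from the raw index values. A sketch (the function name is chosen here; the tie-free formula from the slide is assumed, which is valid for this data set since neither series has ties):

```python
def spearman_rs(x, y):
    """r_s = 1 - 6 * sum(d_i^2) / (n(n^2 - 1)), assuming no ties."""
    def ranks(v):
        s = sorted(v)
        return [s.index(val) + 1 for val in v]   # rank of each value
    rx, ry = ranks(x), ranks(y)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    n = len(x)
    return 1 - 6 * d2 / (n * (n**2 - 1))

# MMI and S&P 100 index values (Example 5-10, as read from the slide)
mmi   = [220, 218, 216, 217, 215, 213, 219, 236, 237, 235]
sp100 = [151, 150, 148, 149, 147, 146, 152, 165, 162, 161]
rs = spearman_rs(mmi, sp100)
print(round(rs, 4))   # sum(d^2) = 4 gives r_s = 1 - 24/990 = 0.9758
```

Since 0.9758 exceeds the critical value 0.794 for n = 10 at α = 0.005, H0 is rejected.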

34 Spearman Rank Correlation Coefficient: Example 5-10 Using Excel

35 Steps in a chi-square analysis:
- Formulate the null and alternative hypotheses.
- Compute the frequencies of occurrence that would be expected if the null hypothesis were true: the expected cell counts.
- Note the actual, observed cell counts.
- Use the differences between expected and actual cell counts to find the chi-square statistic: χ² = Σ (O − E)² / E.
- Compare the chi-square statistic with critical values from the chi-square distribution (with k − 1 degrees of freedom) to test the null hypothesis.
5-8 A Chi-Square Test for Goodness of Fit

36 The null and alternative hypotheses:
H0: The probabilities of occurrence of events E1, E2, ..., Ek are given by p1, p2, ..., pk
H1: The probabilities of the k events are not as specified in the null hypothesis
Example: assuming equal probabilities, p1 = p2 = p3 = p4 = 0.25 and n = 80:

Preference    Tan  Brown  Maroon  Black  Total
Observed      12   40     8       20     80
Expected(np)  20   20     20      20     80
O − E         −8   20     −12     0      0
Goodness-of-Fit Test for the Multinomial Distribution
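The chi-square statistic for this multinomial example follows directly from the table. A minimal sketch (the helper name is chosen here):

```python
def chi_square_stat(observed, expected):
    """Chi-square statistic: sum of (O - E)^2 / E over the cells."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

observed = [12, 40, 8, 20]    # tan, brown, maroon, black
expected = [20, 20, 20, 20]   # n*p = 80 * 0.25 under H0
chi2 = chi_square_stat(observed, expected)
print(chi2)   # 30.4
```

With k − 1 = 3 degrees of freedom, the 0.05-level critical value is 7.815, so 30.4 leads to rejecting H0 of equal preferences.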

37 [Figure: partitioning the standard normal distribution into six ranges, with cut points at z = −1, −0.44, 0, 0.44, and 1.]
1. Use the table of the standard normal distribution to determine an appropriate partition of the standard normal distribution into ranges with approximately equal probabilities:
p(z < −1) = 0.1587
p(−1 < z < −0.44) = 0.1713
p(−0.44 < z < 0) = 0.1700
p(0 < z < 0.44) = 0.1700
p(0.44 < z < 1) = 0.1713
p(z > 1) = 0.1587
2. Given the z boundaries, the x boundaries can be determined from the inverse standard normal transformation: x = μ + σz = 125 + 40z.
3. Compare with the critical value of the χ² distribution with k − 3 degrees of freedom.
Goodness-of-Fit for the Normal Distribution: Example 5-11

38
i  Oi  Ei     Oi−Ei  (Oi−Ei)²  (Oi−Ei)²/Ei
1  14  15.87  −1.87  3.4969    0.22035
2  20  17.13   2.87  8.2369    0.48085
3  16  17.00  −1.00  1.0000    0.05882
4  19  17.00   2.00  4.0000    0.23529
5  16  17.13  −1.13  1.2769    0.07454
6  15  15.87  −0.87  0.7569    0.04769
χ² = 1.11755
χ²(0.10, k−3) = 6.5139 > 1.11755, so H0 is not rejected at the 0.10 level
Example 5-12: Solution
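The same chi-square sum can be checked in a couple of lines (a sketch using the observed and expected counts from the table above):

```python
# Observed counts and normal-model expected counts from the table above
observed = [14, 20, 16, 19, 16, 15]
expected = [15.87, 17.13, 17.00, 17.00, 17.13, 15.87]
chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
print(round(chi2, 4))   # about 1.1175, matching the slide's 1.11755
```

Since the statistic is far below the critical value, H0 (normality) is not rejected at the 0.10 level.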

39 In light of all the recent mergers, companies have looked to employees for help in determining the new company name. When two prominent banks joined forces, 250 employees were chosen at random to evaluate (like or dislike) two names. 140 workers commented on Name A, of whom 85 liked the name. Of the 110 workers who commented on Name B, 54 liked the name. Conduct a chi-square test. Example 5-13

40 Example 5-13: Solution
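The slide does not show the computation. One standard route (an assumption here, not necessarily the text's exact method) is the pooled two-proportion z test; its square equals the chi-square statistic of the corresponding 2x2 table, so the conclusion is the same either way:

```python
from math import sqrt

# Name A: 85 of 140 liked it; Name B: 54 of 110 liked it (250 employees total)
x1, n1 = 85, 140
x2, n2 = 54, 110
p1, p2 = x1 / n1, x2 / n2
p_pool = (x1 + x2) / (n1 + n2)   # pooled proportion under H0: p1 = p2
se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se
print(round(z, 2))   # about 1.84
```

Since |z| ≈ 1.84 is below the two-tailed 0.05 critical value 1.96 (equivalently, z² ≈ 3.37 is below χ²(0.05, 1 df) = 3.841), H0 of equal liking proportions is not rejected at the 0.05 level.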

41 5-9 Contingency Table Analysis: A Chi-Square Test for Independence

42 Null and alternative hypotheses:
H0: The two classification variables are independent of each other
H1: The two classification variables are not independent
Chi-square test statistic for independence: χ² = Σ Σ (Oij − Eij)² / Eij
Degrees of freedom: df = (r − 1)(c − 1)
Expected cell count: events A and B are independent if P(A∩B) = P(A)P(B). If the first and second classification categories are independent, Eij = (Ri)(Cj)/n.
Contingency Table Analysis: A Chi-Square Test for Independence

43
i  j  O   E     O−E    (O−E)²  (O−E)²/E
1  1  42  28.8   13.2  174.24  6.0500
1  2  18  31.2  −13.2  174.24  5.5846
2  1  6   19.2  −13.2  174.24  9.0750
2  2  34  20.8   13.2  174.24  8.3769
χ² = 29.0865
χ²(0.01, (2−1)(2−1)) = 6.6349
H0 is rejected at the 0.01 level, and it is concluded that the two variables are not independent.
Contingency Table Analysis: Example 5-14
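The Eij = (Ri)(Cj)/n construction and the chi-square sum can be sketched for Example 5-14 (the function name is chosen here; the observed counts are those from the table above):

```python
def chi2_independence(table):
    """Chi-square statistic for independence: E_ij = R_i * C_j / n."""
    rows = [sum(r) for r in table]          # row totals R_i
    cols = [sum(c) for c in zip(*table)]    # column totals C_j
    n = sum(rows)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, o in enumerate(row):
            e = rows[i] * cols[j] / n       # expected count under independence
            chi2 += (o - e) ** 2 / e
    return chi2

# Observed counts for Example 5-14, as read from the table above
table = [[42, 18], [6, 34]]
chi2 = chi2_independence(table)
print(round(chi2, 4))   # 29.0865
```

With (2−1)(2−1) = 1 degree of freedom, 29.0865 far exceeds the 0.01 critical value 6.6349, so independence is rejected.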

44 MTB > Unstack (C3) (c4) (c5) (c6) (c7); SUBC> Subscripts C2. MTB > chisquare c4-c7 Chi-Square Test Expected counts are printed below observed counts C4 C5 C6 C7 Total 1 22 21 34 56 133 27.43 30.81 34.19 40.56 2 39 45 42 68 194 40.02 44.95 49.88 59.16 3 77 89 96 80 342 70.55 79.24 87.93 104.29 Total 138 155 172 204 669 ChiSq = 1.077 + 3.126 + 0.001 + 5.881 + 0.026 + 0.000 + 1.244 + 1.322 + 0.590 + 1.203 + 0.741 + 5.656 = 20.867 df = 6, p = 0.002 Given the p-value of 0.002, the null hypothesis of independence can be rejected at any common level of significance. Using the Computer: Example 5-15

45 MTB > ChiSquare C1 C2 C3. Chi-Square Test. Expected counts are printed below observed counts. C1 C2 C3 Total 1 40 35 60 135 45.00 45.00 45.00 2 60 65 40 165 55.00 55.00 55.00 Total 100 100 100 300 ChiSq = 0.556 + 2.222 + 5.000 + 0.455 + 1.818 + 4.091 = 14.141 df = 2, p = 0.001 5-10 Chi-Square Test for Equality of Proportions

46 MTB > median c5 k1 Column Median Median of C5 = 31.500 MTB > let c6=c5-k1 MTB > let c7=sign(c6) MTB > Table C4 C7; SUBC> Counts; SUBC> ChiSquare 2. Tabulated Statistics ROWS: C4 COLUMNS: C7 -1 1 ALL 1 4 6 10 5.00 5.00 10.00 2 5 5 10 5.00 5.00 10.00 3 6 4 10 5.00 5.00 10.00 ALL 15 15 30 15.00 15.00 30.00 CHI-SQUARE = 0.800 WITH D.F. = 2 Chi-Square Test for the Median: Example 5-16

47 Figure 5-12: MTB > RUNS ABOVE AND BELOW 30, C1 Runs Test C1 K = 30.0000 The observed no. of runs = 18 The expected no. of runs = 26.8462 24 Observations above K 28 below The test is significant at 0.0128 Figure 5-13: MTB > mann-whitney (alternative=1) c1 c2 Mann-Whitney Confidence Interval and Test C1 N = 10 Median = 37.50 C2 N = 10 Median = 27.00 Point estimate for ETA1-ETA2 is 11.00 95.5 Percent C.I. for ETA1-ETA2 is (7.00,18.00) W = 145.5 Test of ETA1 = ETA2 vs. ETA1 > ETA2 is significant at 0.0012 The test is significant at 0.0012 (adjusted for ties) 5-11 Using the Computer: Mann-Whitney Test

48 MTB > let c3=c1-c2 MTB > wtest c3 Wilcoxon Signed Rank Test TEST OF MEDIAN = 0.000000 VERSUS MEDIAN N.E. 0.000000 N FOR WILCOXON ESTIMATED N TEST STATISTIC P-VALUE MEDIAN C3 11 11 12.5 0.075 -14.75 C1C2C3 143165-22 124170-46 179231-52 16615412 133200-67 167169-2 134149-15 104111-7 190202-12 1219823 144152-8 Table 5-14 Using the Computer: Wilcoxon Signed-Rank Test

49 MTB > Kruskal-Wallis C1 C2. Kruskal-Wallis Test LEVEL NOBS MEDIAN AVE. RANK Z VALUE 1 9 15.00 9.9 -0.71 2 7 18.00 17.5 3.39 3 5 12.00 3.9 -2.93 OVERALL 21 11.0 H = 14.52 d.f. = 2 p = 0.001 H = 14.72 d.f. = 2 p = 0.001 (adjusted for ties) C1C2 121 141 151 121 151 171 151 171 161 182 162 172 192 182 212 202 113 123 103 123 143 Table 5-15 Using the Computer: Kruskal-Wallis Test

50 MTB > print c1 c2 Row C1 C2 1 86 54 2 89 59 3 97 66 4 54 20 5 66 70 6 49 57 7 40 81 8 69 90 9 22 60 10 39 95 MTB > rank c1 c3 MTB > rank c2 c4 MTB > correlation c3 c4 Correlations (Pearson) Correlation of C3 and C4 = -0.236 Table 5-16 Using the Computer: Rank Correlation

51 c1c2c3 314 320 330 4110 422 439 516 5213 535 MTB > Table 'c1' 'c2'; SUBC> Frequencies 'c3'; SUBC> ChiSquare 2. Tabulated Statistics ROWS: c1 COLUMNS: c2 1 2 3 Total 3 4 0 0 4 1.63 1.22 1.14 4.00 4 10 2 9 21 8.57 6.43 6.00 21.00 5 6 13 5 24 9.80 7.35 6.86 24.00 Total 20 15 14 49 20.00 15.00 14.00 49.00 CHI-SQUARE = 16.913 WITH D.F. = 4 Table 5-17 Using the Computer: Chi-Square Test

