1 Laws of Logic and Rules of Evidence Larry Knop Hamilton College

2 Transitivity of Order For any real numbers a, b, c, if a < b and b < c, then a < c. Transitivity is an implication: we must know a < b and b < c in order to conclude a < c. In mathematics: no problem. We know numbers. In life: there's a problem.

3 In life we don’t know anything for certain. To find the value of a number a, we must take measurements and determine a as best we can from the evidence. Transitivity in the real world: Suppose the evidence shows a < b and the evidence shows b < c. Can we conclude, based on the evidence, that a < c?

4 Background Let μ_F be the mean height of HC females. Let μ_M be the mean height of HC males. Conjecture: μ_F < μ_M. Evidence: Take a sample of HC females and measure each subject's height. Data: f_1, f_2, …, f_n. Then take a sample of HC males and measure each subject's height. Data: m_1, m_2, …, m_p.

5 To investigate our conjecture we will test H_0: μ_F = μ_M (or μ_M − μ_F = 0) versus H_a: μ_F < μ_M (or μ_M − μ_F > 0). Test statistic: the difference of sample means, m̄ − f̄, suitably standardized. Burning question: How likely is it to get a value of m̄ − f̄ as extreme as what we observed, or more so, if μ_F = μ_M?

6 Goal: Calculate probabilities for m̄ − f̄. To do so, we need a probability distribution. Starters: Assume each f_i is normally distributed with mean μ_F and standard deviation σ_F. Assume each m_j is normally distributed with mean μ_M and standard deviation σ_M. Further, assume we were reasonably intelligent as to how we chose our subjects, so that the measurements are independent.
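
A minimal simulation sketch of this setup. The population means, standard deviations, and sample sizes below are made up for illustration; they are not values from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population parameters (not from the talk) -- heights in inches.
mu_F, sigma_F = 64.0, 2.5   # HC females
mu_M, sigma_M = 70.0, 3.0   # HC males
n, p = 25, 30               # made-up sample sizes

# Independent normal measurements, per the assumptions on slide 6.
f = rng.normal(mu_F, sigma_F, size=n)   # f_1, ..., f_n
m = rng.normal(mu_M, sigma_M, size=p)   # m_1, ..., m_p

print("sample means:", f.mean(), m.mean())
print("sample s.d.: ", f.std(ddof=1), m.std(ddof=1))
```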

7 Under our assumptions the random variable Z = ((m̄ − f̄) − (μ_M − μ_F)) / √(σ_F²/n + σ_M²/p) has a standard normal distribution. Unfortunately there is a problem: we don't know σ_F or σ_M. Solution? We can replace the population standard deviations σ_F and σ_M by the sample standard deviations s_F and s_M.
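
If σ_F and σ_M really were known, the Z statistic above could be computed directly. A sketch with made-up summary numbers (the 0 is the hypothesized value of μ_M − μ_F under H_0):

```python
import numpy as np
from scipy.stats import norm

# Made-up summary numbers for illustration.
f_bar, m_bar = 64.3, 69.8        # sample means
sigma_F, sigma_M = 2.5, 3.0      # (pretend) known population s.d.
n, p = 25, 30

# Z = ((m_bar - f_bar) - (mu_M - mu_F)) / sqrt(sigma_F^2/n + sigma_M^2/p),
# with mu_M - mu_F = 0 under H_0.
z = (m_bar - f_bar - 0) / np.sqrt(sigma_F**2 / n + sigma_M**2 / p)
p_value = norm.sf(z)             # one-sided: H_a says mu_M - mu_F > 0
print(z, p_value)
```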

8 Under our assumptions the random variable ((m̄ − f̄) − (μ_M − μ_F)) / √(s_F²/n + s_M²/p) has a #@!!&% distribution. The distribution of this random variable can be approximated by a t-distribution, if you don't mind fractional degrees of freedom. Or, it can be approximated by a standard t-distribution with integer degrees of freedom, if you don't mind a less-than-optimal approximation.
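
A sketch of the first option, the t approximation with fractional degrees of freedom (the Welch–Satterthwaite formula), run on made-up simulated samples; scipy's ttest_ind with equal_var=False performs the same Welch-type test:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
f = rng.normal(64, 2.5, size=25)   # made-up samples for illustration
m = rng.normal(70, 3.0, size=30)

n, p = len(f), len(m)
vf, vm = f.var(ddof=1), m.var(ddof=1)

# Welch-Satterthwaite: fractional degrees of freedom for the approximating t.
df = (vf / n + vm / p) ** 2 / ((vf / n) ** 2 / (n - 1) + (vm / p) ** 2 / (p - 1))
t_stat = (m.mean() - f.mean()) / np.sqrt(vf / n + vm / p)
print("t statistic:", t_stat, "fractional df:", df)

# Cross-check with scipy's Welch test (two-sided p-value by default).
print(stats.ttest_ind(m, f, equal_var=False))
```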

9 Alternatively, we can assume a common variance, so σ_M = σ_F = σ_C. Then Z = ((m̄ − f̄) − (μ_M − μ_F)) / (σ_C √(1/n + 1/p)) has a standard normal distribution. The common s.d. σ_C can be estimated by a pooled average of the two sample standard deviations.
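
A sketch of the pooled estimate of the common s.d., the usual weighted average of the two sample variances (the function name is mine):

```python
import math

def pooled_sd(s_f, n, s_m, p):
    """Pooled standard deviation: weighted average of the two sample variances."""
    return math.sqrt(((n - 1) * s_f**2 + (p - 1) * s_m**2) / (n + p - 2))

# Example with the summary statistics for samples a and c on slide 11:
# prints roughly 1.70, close to Minitab's 1.7004 (the rounded s.d. are used here).
print(pooled_sd(1.86, 4, 1.53, 4))
```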

10 With the common variance assumption the random variable T = ((m̄ − f̄) − (μ_M − μ_F)) / (s_p √(1/n + 1/p)) has a t-distribution with n + p − 2 df, where s_p is the pooled standard deviation. All is sweetness and light, except that the assumption of a common variance raises a conflict with transitivity.
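
A sketch of the pooled two-sample t statistic and its one-sided p-value, written out from summary statistics (function and variable names are mine; the numbers in the example call are made up):

```python
import math
from scipy.stats import t as t_dist

def pooled_t_test(mean_f, s_f, n, mean_m, s_m, p):
    """One-sided pooled two-sample t-test of H0: mu_F = mu_M vs Ha: mu_F < mu_M."""
    s_p = math.sqrt(((n - 1) * s_f**2 + (p - 1) * s_m**2) / (n + p - 2))
    t_stat = (mean_m - mean_f) / (s_p * math.sqrt(1.0 / n + 1.0 / p))
    df = n + p - 2
    p_value = t_dist.sf(t_stat, df)   # P(T >= t) under H0
    return t_stat, df, p_value

# Made-up summary numbers for illustration.
print(pooled_t_test(64.3, 2.4, 25, 69.8, 3.1, 30))
```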

11 Example. The following are selected random samples generated by Minitab from normal distributions with a common variance of 1.
Sample a: 5.45915, 2.52951, 5.45659, 1.99805 (n = 4, mean = 3.86, s_a = 1.86)
Sample b: 5.21471, ..., 3.89517, 4.92814 (n = 100, mean = 4.887, s_b = 0.92)
Sample c: 6.30382, 7.98292, 4.74109, 4.81625 (n = 4, mean = 5.96, s_c = 1.53)

12 Test H_0: μ_a = μ_b. T-Test of difference = 0 (vs <): T-Value = -2.10, P-Value = 0.019, DF = 102. Both use Pooled StDev = 0.9605. Reject H_0; the evidence supports the claim a < b.
Test H_0: μ_b = μ_c. T-Test of difference = 0 (vs <): T-Value = -2.23, P-Value = 0.014, DF = 102. Both use Pooled StDev = 0.9433. Reject H_0; the evidence supports the claim b < c.
Test H_0: μ_a = μ_c. T-Test of difference = 0 (vs <): T-Value = -1.75, P-Value = 0.066, DF = 6. Both use Pooled StDev = 1.7004. Do NOT reject H_0; the evidence is not strong enough to reject a = c.
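
These three tests can be re-derived, approximately, from the summary statistics on slide 11; small discrepancies from the Minitab output are expected because only the rounded means and s.d. are used here. The test is one-sided ("vs <"), so the lower-tail p-value is reported.

```python
import math
from scipy.stats import t as t_dist

samples = {                      # (n, mean, sample s.d.) from slide 11
    "a": (4, 3.86, 1.86),
    "b": (100, 4.887, 0.92),
    "c": (4, 5.96, 1.53),
}

def pooled_t(first, second):
    """One-sided pooled t-test of H0: mu_first = mu_second vs Ha: mu_first < mu_second."""
    n1, m1, s1 = samples[first]
    n2, m2, s2 = samples[second]
    s_p = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    t_stat = (m1 - m2) / (s_p * math.sqrt(1.0 / n1 + 1.0 / n2))
    df = n1 + n2 - 2
    return t_stat, df, t_dist.cdf(t_stat, df), s_p   # lower-tail p-value

for pair in [("a", "b"), ("b", "c"), ("a", "c")]:
    print(pair, pooled_t(*pair))
# a vs b: t ~ -2.10, df 102, p ~ 0.02 -> reject H0
# b vs c: t ~ -2.23, df 102, p ~ 0.01 -> reject H0
# a vs c: t ~ -1.75, df 6,   p ~ 0.07 -> do NOT reject H0
```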

13 The breaking of transitivity comes from the pooling of the standard deviations. The standard deviation is a measure of how well we know the location of a quantity. If we know one quantity well (small s.d. and large n) then the common variance assumption carries that knowledge over to the difference – even though knowledge of the second part of the difference is much less precise. In the example we know b, the middle quantity, very well while our knowledge of a and of c is much less precise.

14 Comparing a and b: the estimated difference is x̄_b − x̄_a = 1.026 with a pooled s.d. of 0.9605. For the given sample sizes, the difference is statistically significant. The comparison of b and c is similar. Comparing a and c: the estimated difference is x̄_c − x̄_a = 2.10 with a pooled s.d. of 1.7004. Even though the estimated difference is larger, the pooled s.d. is also much larger and the sample sizes are both small. Consequently the difference is not significantly different from 0.

15 So much for real-world transitivity. There is a logic to the rules of evidence, but the logic is not quite as simple as the logic of mathematics. It should be noted that ANOVA, the ANalysis Of VAriance, applies to the comparison of n quantities and is based on the assumption of a common variance, which leads to some interesting outcomes.
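
For reference, a one-way ANOVA on three such samples could be sketched with scipy as below. The data are re-simulated stand-ins (slide 11 lists only part of sample b), and scipy's f_oneway, like the pooled t-test, assumes a common variance across groups.

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(2)

# Re-simulated stand-ins for samples a, b, c: common variance 1, unequal sample sizes.
a = rng.normal(3.9, 1.0, size=4)
b = rng.normal(4.9, 1.0, size=100)
c = rng.normal(6.0, 1.0, size=4)

# One-way ANOVA: H0 is that all three population means are equal.
F, p_value = f_oneway(a, b, c)
print(F, p_value)
```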

