# Parametric Inferential Statistics



Types of Inference Estimation: On the basis of information in a sample of scores, we estimate the value of a population parameter. Hypothesis Testing: We determine how well the data in the sample fit with the hypothesis that in the population a parameter has a particular value.

Sampling Distribution The probability distribution of a statistic. Imagine this: –Draw an uncountably large number of samples, each with N scores. –For each sample, calculate a statistic (for example, the mean). –The distribution of these statistics is the sampling distribution.
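The thought experiment above can be approximated by simulation. A minimal sketch in Python (the population values, mean 100 and SD 15, are illustrative assumptions, and 10,000 samples stand in for "uncountably many"):

```python
import random
import statistics

random.seed(0)

N = 25            # scores per sample
SAMPLES = 10_000  # stands in for the "uncountably large number" of samples

# Illustrative population: normal, mean 100, SD 15
means = [
    statistics.mean(random.gauss(100, 15) for _ in range(N))
    for _ in range(SAMPLES)
]

# The simulated sampling distribution of the mean centers on the
# population mean, with spread (standard error) near 15 / sqrt(25) = 3
print(round(statistics.mean(means), 1))
print(round(statistics.stdev(means), 1))
```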

Desirable Properties of Estimators CURSE –Consistency –Unbiasedness –Resistance –Sufficiency –Efficiency

Unbiasedness Unbiasedness: The expected value (mean) of the statistic is exactly equal to the value of the parameter being estimated. The sample mean is an unbiased estimator of the population mean. The sample variance (computed with N − 1 in the denominator) is an unbiased estimator of the population variance.

The sample standard deviation is not an absolutely unbiased estimator of the population standard deviation. Consider this sampling distribution of variances:

| Sample Variance | Probability | Pᵢsᵢ² |
| --- | --- | --- |
| 2 | .5 | 1 |
| 4 | .5 | 2 |

E(s²) = ƩPᵢsᵢ² = 3 = the population variance, so the population standard deviation must be SQRT(3) = 1.732.

For these same samples, the sampling distribution of the standard deviations is:

| Sample SD | Probability | Pᵢsᵢ |
| --- | --- | --- |
| SQRT(2) = 1.414 | .5 | .707 |
| SQRT(4) = 2 | .5 | 1 |

E(s) = ƩPᵢsᵢ = 1.707. Oops, the expected value (1.707) is not equal to the value of the population parameter (1.732).
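The arithmetic in these two tables can be checked directly; a minimal sketch:

```python
import math

# Two equally likely samples: one yields variance 2, the other variance 4
dist = [(2, 0.5), (4, 0.5)]

e_var = sum(p * v for v, p in dist)            # E(s^2) = 3: matches the population variance
e_sd = sum(p * math.sqrt(v) for v, p in dist)  # E(s), about 1.707
pop_sd = math.sqrt(e_var)                      # SQRT(3), about 1.732

# E(s) falls short of the population SD: the sample SD is biased
print(e_var, round(e_sd, 3), round(pop_sd, 3))
```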

Relative Efficiency The standard deviation of a sampling distribution is called its standard error. The smaller the standard error, the less error one is likely to make when using the statistic to estimate a parameter. Statistics with relatively low standard errors are known as efficient statistics.

We Play a Little Game You are allowed to draw a sample of scores from a population whose mean Professor Karl knows. You then get to estimate that mean from the sample data. If your estimate is within one point of the true value of the parameter, you get an A in the course; otherwise you fail.

There are three different estimators available, X, Y, and Z. Each is absolutely unbiased. They differ greatly in dispersion. Which one will you use? Let’s look at their sampling distributions.

Estimator X: SEM = 5 You have a 16% chance of getting that A.

Estimator Y: SEM = 1 You have a 68% chance of earning that A.

Estimator Z: SEM = 0.5 You have a 95% chance of earning that A.
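The three pass rates follow from the normal curve: with an unbiased, normally distributed estimator, the chance of landing within one point of the parameter is the probability that a standard normal falls within ±1/SEM. A quick check using the error function from the standard library:

```python
import math

def phi(x):
    """Standard normal CDF, computed via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def p_within_one_point(sem):
    # P(estimate falls within +/- 1 point of the true value)
    return phi(1 / sem) - phi(-1 / sem)

# SEM = 5, 1, 0.5 give roughly 16%, 68%, and 95% chances of that A
for sem in (5, 1, 0.5):
    print(sem, round(p_within_one_point(sem), 2))
```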

Consistency With a consistent estimator, the standard error decreases as the sample size increases. For example, the standard error of the mean is σ/SQRT(N), which shrinks toward zero as N grows.
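A sketch of that shrinkage, using the IQ standard deviation of 15 that appears in the later examples:

```python
import math

sigma = 15  # population SD
for n in (1, 4, 25, 100, 400):
    # Standard error of the mean: sigma / sqrt(N), falling as N rises
    print(n, sigma / math.sqrt(n))
```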

Sufficiency A sufficient estimator uses all of the information in the sample. Consider the range: it uses information only from the lowest score and the highest score. Consider the variance: it uses information from every score.

Resistance This refers to resistance to the effects of outliers. We have already seen that the median is much more resistant than the mean.

Parameter Estimation There are two basic types –Point Estimation: We come up with a single value which is our best bet regarding what the value of the parameter is. –Interval Estimation: We come up with an interval of values which we are confident contains the true value of the parameter.

Confidence Coefficient CC is the subjective probability that the interval will include the true value of the parameter. Suppose CC = .95. Were we to construct an infinite number of confidence intervals, 95% of them would include the true value. α is the subjective probability that the interval will not contain the true value; α = 1 − CC.
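The "infinite number of confidence intervals" claim can be approximated by simulation; a sketch, assuming illustrative population values (mean 100, SD 15, samples of N = 25):

```python
import math
import random

random.seed(0)

MU, SIGMA, N, Z = 100, 15, 25, 1.96
sem = SIGMA / math.sqrt(N)

TRIALS = 10_000
covered = 0
for _ in range(TRIALS):
    m = sum(random.gauss(MU, SIGMA) for _ in range(N)) / N
    # Does the 95% CI around this sample mean capture the true mean?
    if m - Z * sem <= MU <= m + Z * sem:
        covered += 1

# The coverage proportion lands close to the confidence coefficient, .95
print(covered / TRIALS)
```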

If Sampling Distribution is Normal The confidence interval will be θ̂ ± z σ_θ̂, where θ̂ (theta-hat) is the estimator, σ_θ̂ (sigma-theta-hat) is its standard error, and z is how far you must go out from the mean of the sampling distribution to encompass the middle CC proportion of the distribution.

Example At Bozo College, IQ is normally distributed with a standard deviation of 15. We want to estimate the mean. A sample of N = 1 has mean = 110. A 95% CI for the mean is 110 ± 1.96(15) = [80.6, 139.4].
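The Bozo College interval, reproduced in a few lines (1.96 is the usual two-tailed 95% critical value of z):

```python
import math

sigma, n, mean, z = 15, 1, 110, 1.96
sem = sigma / math.sqrt(n)           # with N = 1, the SEM is just sigma
lo, hi = mean - z * sem, mean + z * sem
print(round(lo, 1), round(hi, 1))    # 80.6 139.4
```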

Hypothesis Testing The null hypothesis states that some parameter has a particular value. Example: μ = 100. The alternative hypothesis states that the null hypothesis is not correct. Example: μ ≠ 100. These are nondirectional hypotheses – the alternative does not predict the direction of difference from 100.

Directional Hypotheses The alternative does predict a direction. Example –H₀ is μ ≤ 100 –H₁ is μ > 100 Another example –H₀ is μ ≥ 100 –H₁ is μ < 100

What the Null Usually Is It is usually that the value of the correlation between two variables or sets of variables is zero. That is, the variables are not related. The researcher usually thinks that the null is incorrect.

What the Null Rarely Is The prediction from a mathematical model being tested. For example, mean weight loss during this treatment is 23.67 pounds. In this case, if we find the null to be substantially incorrect, we need to reject or revise the mathematical model (aka “theory”).

Testing the Null Gather relevant data. Determine how well the data fit with the null hypothesis, using a statistic called “p,” the level of significance. p is the probability of obtaining a sample as discrepant or more discrepant with H₀ than the one we did obtain, assuming that H₀ is true.

The higher this p, the better the fit between the data and H₀. If p is low, we have cast doubt upon H₀. If p is very low, we reject H₀. How low is very low? Very low is usually .05 -- the decision rule most often used in Psychology is: reject the null if p ≤ .05.

Simple Example H₀: mean IQ of my extended family is 145. H₁: no it isn’t. Sample of one score: IQ = 110. Assume SD = 15 and the population is normal. z = (110 − 145) / 15 = −2.33. Is that z score unusual enough for us to reject the null hypothesis?

From the normal curve table, we determine that were the null true, we would get a z that far from zero only 1.98% of the time. That is, p = .0198. This is less than .05, so we reject the null hypothesis and conclude that the mean is not 145.
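The exact p can be computed from the normal CDF rather than read from a table; a sketch, with z rounded to −2.33 as on these slides:

```python
import math

def phi(x):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

z = -2.33                      # (110 - 145) / 15, rounded as in the slides
p_two_tailed = 2 * phi(-abs(z))  # double the tail area for a two-tailed test
print(round(p_two_tailed, 4))  # 0.0198
```

Halving this two-tailed p gives the one-tailed value used later with directional hypotheses.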

This Was a Two-Tailed Test

Do You Need a p Value? Journals act like it is not real science unless there is a p value, but all you really need is a CI. 95% CI = 110 ± 1.96(15) = [80.6, 139.4]. We are 95% confident that the mean is between 80.6 and 139.4. That interval excludes 145, so we are at least 95% confident that the mean is not 145 → reject the null.

A More Traditional Approach Think about the sampling distribution of the test statistic (z). The nonrejection region is the area with values that would not lead to rejection of the null. The rejection region is the area with values that would lead to rejection of the null.

CV is the “critical value,” the boundary between the rejection and nonrejection regions.

Decision Rule If |z| > 1.96, then p < .05 and we reject the null. Otherwise, we retain the null. We never figure out exactly what the value of p is. I strongly prefer that you use the modern approach, where you find the exact value of p.

“Confusion Matrix”

| Decision | H₁ is true | H₀ is true |
| --- | --- | --- |
| Reject H₀ (assert H₁) | correct decision (power) | Type I error (α) |
| Retain H₀ (do not assert H₁) | Type II error (β) | correct decision (1 − α) |

Signal Detection

| Prediction | Signal really is there | Signal really is not there |
| --- | --- | --- |
| Signal is there | True Positive [hit] (power) | False Positive (α) |
| Signal is not there | False Negative [miss] (β) | True Negative (1 − α) |

Relative Seriousness of Type I and II Errors Tumor rate in rats is 10%. Treat them with suspect drug. –H₀: rate ≤ 10%; drug is safe –H₁: rate > 10%; drug is not safe Type I Error: The drug is safe, but you conclude it is not. Type II Error: The drug is not safe, but you conclude it is.

Testing experimental blood pressure drug. –H₀: drop in BP ≤ 0; drug does not work –H₁: drop in BP > 0; drug does work Type I Error: The drug does not lower BP, but you conclude it does. Type II Error: The drug does lower BP, but you conclude it does not.

Directional Hypotheses The alternative hypothesis predicts the direction of the difference between the actual value of the parameter and the hypothesized value. The rejection region will be in only one tail – the side to which the alternative hypothesis points. H₀: μ ≤ 145 versus H₁: μ > 145

z = (110 − 145) / 15 = −2.33. Our test statistic is not in the rejection region, so we must retain the null. The p value will be P(z > −2.33), which is equal to .9901. The data fit very well with the null hypothesis that μ ≤ 145.

Change the Prediction H₀: μ ≥ 145 versus H₁: μ < 145. The rejection region is now in the lower tail. If z ≤ −1.645, we reject the null. Our z is still −2.33, so we reject the null.

The p value is now P(z < −2.33), which is equal to .0099. We do not double the p value as we would with nondirectional hypotheses.

Pre- or Post-diction? If you correctly predicted (H₁) the direction, the p value will be half what it would have been with nondirectional hypotheses. That gives you more power. BUT others will suspect that you postdicted the direction of the difference.

Frequency of Type I Errors If we are in that universe where the null hypothesis is always true, then using the .05 criterion of statistical significance, we should make Type I errors 5% of the time. There may be factors that inflate this percentage. Failures to reject the null are not often published.

“Publish or Perish.” May produce unconscious bias against keeping the null, affecting data collection and analysis. May lead to fraud.

The File Drawer Problem 100 researchers test the same absolutely true null hypothesis. 5 get “significant” results. Joyfully, they publish, unaware that their conclusions are Type I errors. The other 95 just give up and put their “not significant” results in a folder in the filing cabinet.
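This scenario is easy to simulate; a sketch, assuming each researcher runs a two-tailed z test on 30 scores drawn from a population where the null (μ = 0) is exactly true:

```python
import math
import random

random.seed(1)

def phi(x):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

N = 30
significant = 0
for _ in range(100):  # 100 researchers, one study each, null exactly true
    scores = [random.gauss(0, 1) for _ in range(N)]
    z = (sum(scores) / N) / (1 / math.sqrt(N))  # sample mean over its SEM
    if 2 * phi(-abs(z)) <= 0.05:
        significant += 1

# Roughly 5 of the 100 should reach "significance" -- every one a Type I error
print(significant)
```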

Is an Exact Null Ever True? The null is usually that the correlation between two things is zero. Correlation coefficients are continuously distributed between -1 and +1. The probability of any exact value of a continuous variable (such as ρ = 0) is vanishingly small. But a null may be so close to true that it might as well be true.


