Inferential Statistics


1 Inferential Statistics
From Sample to Population: Understanding Inferential Statistics


3 Review: What scale of measurement is used for each of these variables?
- Type of M&M you were given
- "Yummy scale" scores
- Number of green M&Ms

4 Create a bar graph showing the color frequencies for all “whole” M&Ms.
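If you would rather build the graph in software, here is a minimal Python/matplotlib sketch; the color counts below are made up, so substitute the tallies from your own pack.

    # Bar graph of M&M color frequencies (hypothetical counts; replace with your own).
    import matplotlib.pyplot as plt

    color_counts = {"Brown": 3, "Yellow": 2, "Red": 4, "Orange": 5, "Green": 2, "Blue": 4}

    plt.bar(list(color_counts.keys()), list(color_counts.values()),
            color=["brown", "gold", "red", "orange", "green", "blue"])
    plt.xlabel("M&M color")
    plt.ylabel("Frequency")
    plt.title("Color frequencies for whole M&Ms")
    plt.show()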

5 Research Questions:
- How closely do the contents of your bag match the typical fun-size pack?
- Do you have a typical number of green M&Ms?
- Is your bag really unusual or pretty common?
To answer these questions, we have to know the distribution of green M&Ms across many samples.

6 Sampling Distribution of the Mean
A theoretical distribution generated by taking samples of the same size over and over again from the same population and recording each sample's mean.

7 How does it work?
1. Choose a sample size (e.g., n = 20, one fun-size packet of M&Ms).
2. Take every possible sample of that size from the population.
3. Calculate the mean of each sample and plot it on a frequency histogram.
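Taking every possible sample isn't practical, but we can approximate the idea by simulation. A rough Python sketch (the population below is invented purely for illustration):

    # Approximate a sampling distribution of the mean by drawing many samples of n = 20
    # from a pretend population and plotting the sample means.
    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(0)
    population = rng.normal(loc=2.5, scale=1.2, size=100_000)  # made-up population values

    sample_means = [rng.choice(population, size=20).mean() for _ in range(5_000)]

    plt.hist(sample_means, bins=30)
    plt.xlabel("Sample mean")
    plt.ylabel("Frequency")
    plt.title("Approximate sampling distribution of the mean (n = 20)")
    plt.show()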

8 Our class sample distribution
[Histogram: number of green M&Ms per pack]

9 Sampling Error How accurately does your sample mean represent the population? How accurately does our class sample mean represent the population? (Mars, Inc. says that in a fun-size packet we should find about 2.5 green M&Ms) Were we close?

10 Two determinants of Sampling Error:
1. Sample size (N)
2. Variability (SD)
So one fun-size packet of M&Ms (or a small sample of people) is less likely to be representative of the population, especially if there is a lot of variability in the population.
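These two determinants combine in the standard error of the mean, SE = SD / √N. A quick Python sketch with hypothetical per-pack counts:

    # Standard error of the mean: variability (SD) divided by the square root of sample size (N).
    import numpy as np

    green_counts = np.array([1, 3, 2, 4, 2, 3, 1, 2, 3, 2])  # green M&Ms per pack (made up)

    sd = green_counts.std(ddof=1)           # sample standard deviation
    se = sd / np.sqrt(len(green_counts))    # standard error of the mean
    print(f"SD = {sd:.2f}, SE = {se:.2f}")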

11 Sampling Error and Confidence Intervals
When we report a 95% CI for a sample mean, we are saying that we are 95% confident that this range of values contains the "true" population mean. The wider the range, the larger the sampling error.

12 Calculate the 95% CI for green M&Ms.
1. Take the mean.
2. Multiply the standard error by 2.
3. Add and subtract this number from the mean to get the 95% CI.
(Mars, Inc. says that in a fun-size packet we should find about 2.5 green M&Ms.) Were we close? Does 2.5 fall in our range?
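The same recipe in Python, again with invented pack counts standing in for the class data:

    # Approximate 95% CI: mean ± 2 × standard error (per the slide's recipe).
    import numpy as np

    green_counts = np.array([1, 3, 2, 4, 2, 3, 1, 2, 3, 2])  # hypothetical class data

    mean = green_counts.mean()
    se = green_counts.std(ddof=1) / np.sqrt(len(green_counts))
    lower, upper = mean - 2 * se, mean + 2 * se

    print(f"Mean = {mean:.2f}, 95% CI ~ [{lower:.2f}, {upper:.2f}]")
    print("Does 2.5 fall in our range?", lower <= 2.5 <= upper)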

13 In-Class Confidence Interval Work

14 Testing a Hypothesis What if our question is…
Will there be a difference in the mean “yummy” rating for Peanut and Plain M&Ms?

15 Null Hypothesis Testing
Accepting vs. rejecting hypotheses

16 The Null Hypothesis Assumes that the IV had no effect at all; any difference between the mean ratings was due to chance (sampling error). Stated: there will be NO difference between plain and peanut M&M yummy ratings.

17 Null vs. Alternative The null hypothesis assumes NO effect.
Your alternative hypothesis assumes there will be a difference or an effect.
Directional: Plain > Peanut
Non-directional: Plain ≠ Peanut

18 Testing our Null Hypothesis
To test this we need a sampling distribution based on this type of null hypothesis: no difference between two means.

19 Types of Sampling Distributions (“magic tables”)
One-sample z score: tests the null assumption that our sample mean is equal to some pre-determined mean.
t-distribution (Student's t-test): tests the null assumption that two sample means are similar enough that they probably came from the same population.
ANOVA (F test): tests the null hypothesis that more than two sample means are equal.
Correlation (r): tests the null hypothesis that two variables are unrelated.
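For reference, each of these "magic tables" has a standard SciPy counterpart; the data below are fabricated just to make the snippet run (note that SciPy's one-sample test is a t-test rather than a z-test):

    # Mapping the four distributions to SciPy calls, using made-up data.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    a, b, c = rng.normal(5, 1, 30), rng.normal(5.5, 1, 30), rng.normal(6, 1, 30)

    print(stats.ttest_1samp(a, popmean=2.5))  # sample mean vs. a pre-determined mean
    print(stats.ttest_ind(a, b))              # two sample means (Student's t-test)
    print(stats.f_oneway(a, b, c))            # ANOVA: more than two sample means
    print(stats.pearsonr(a, b))               # correlation between two variables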

20 5 Steps to test your hypothesis
1. State the null: peanut M&M mean = plain M&M mean. The alternative is always your research hypothesis (Group A > or < Group B). We don't actually state the null in our papers, but it is assumed, since we can only disprove the null; we can never prove the alternative.

21 2. Select your statistic. Examples: t-test, ANOVA, correlation, chi-square.
For our example, we will use a t-test since it uses an appropriate sampling distribution for our null hypothesis.

22 3. Calculate. Calculate the actual difference in group means and standardize this mean difference. You could use software such as SPSS, use online calculators for common statistics, or compute by hand. We'll use this online calculator.
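If you prefer code to an online calculator, here is a minimal sketch of this step using SciPy (the "yummy" ratings below are stand-ins, not real class data):

    # Independent-samples t-test on made-up yummy ratings.
    from scipy import stats

    plain  = [7, 8, 6, 9, 7, 8, 7, 6, 8, 7]
    peanut = [8, 9, 9, 7, 8, 9, 8, 9, 7, 8]

    t_stat, p_value = stats.ttest_ind(plain, peanut)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")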

23 4. Determine Probability
Determine the probability that your findings could occur by chance alone. (Refer to a distribution table or get the exact p-value from a stats program.) You'll need the degrees of freedom too, based on the size of the groups. (In our example: df = n1 + n2 − 2.)

24 5. Draw your conclusion.
p < .05: reject the null and accept your hypothesis.
p > .05: accept the null.
MEMORIZE THIS!

25 5. Draw your conclusion from the p-value
The p-value stands for probability and can be read as a percentage.
e.g., p = .03 (3% probability); p = .45 (45% probability)
p < .05: reject the null and accept your hypothesis.
p > .05: accept the null.
MEMORIZE THIS!

26 Review: How to Use a Stats Table
1. Obtain your "computed" t-value by calculating a t-test.
2. Look up the "critical" t-value in a table for the level of risk you are willing to take, using your df to find the right row.
3. If your computed t-value is equal to or greater than the critical t-value in the table, then your results are unlikely to have happened by chance!
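The same lookup can be done in code instead of a printed table; the df and computed t below are example values only:

    # Compare a computed t-value against the critical t-value for alpha = .05 (two-tailed).
    from scipy import stats

    alpha, df = 0.05, 18                         # e.g., two groups of 10: df = 10 + 10 - 2
    t_critical = stats.t.ppf(1 - alpha / 2, df)  # two-tailed critical value

    t_computed = 2.30                            # pretend result from step 3
    print(f"critical t = {t_critical:.3f}")
    print("Reject the null" if abs(t_computed) >= t_critical else "Accept the null")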


28 Write it up in APA style
There was/was not a significant difference in the mean yummy ratings between peanut and plain M&Ms, t(#) = #, p = #. The yummy rating for peanut M&Ms (M = #, SD = #) compared to plain (M = #, SD = #).
First # = df (based on the number of responses); second # = the calculated statistic; third # = the probability value associated with that result.
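One way to fill in the template programmatically (every number below is a placeholder, not a real result):

    # Plug placeholder results into the APA-style sentence.
    t_stat, df, p = 2.30, 18, 0.034
    m_peanut, sd_peanut = 8.2, 0.9
    m_plain, sd_plain = 7.3, 1.0

    print(f"There was a significant difference in the mean yummy ratings between "
          f"peanut (M = {m_peanut}, SD = {sd_peanut}) and plain (M = {m_plain}, SD = {sd_plain}) M&Ms, "
          f"t({df}) = {t_stat:.2f}, p = {p:.3f}.")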

29 Quick Quiz
1. Is a p-value of .50 statistically significant?
2. What is the probability (percentage) associated with a p-value of ___?
3. If the p-value = .04, will you accept or reject the null?
4. What could happen to your sampling error if you widen your sample to include international students?

30 Understanding Significance
If the probability that a particular difference or relationship could be found by chance alone is small, then it is statistically significant. The level of significance is arbitrary (why 5%?); alpha (5%) is set by the experimenter. Statistical significance is not the same as practical significance.

31 One-Tailed vs. Two-Tailed
One-tailed: directional; you only have to look at one side of the distribution (alpha = .05 in that tail).
Two-tailed: nondirectional; you have to look at both sides of the distribution (alpha is split, .025 in each tail).
Tables vs. SPSS
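To see how the choice of tails changes the p-value, a small sketch (the t and df values are arbitrary):

    # One-tailed vs. two-tailed p-values for the same t statistic.
    from scipy import stats

    t_stat, df = 2.10, 25
    p_one_tailed = stats.t.sf(t_stat, df)           # area in one tail
    p_two_tailed = 2 * stats.t.sf(abs(t_stat), df)  # area in both tails
    print(f"one-tailed p = {p_one_tailed:.3f}, two-tailed p = {p_two_tailed:.3f}")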

32 Errors in Inferential Conclusions
Type I: concluding your IV had an effect when the difference was just due to chance. Related directly to the p-value (alpha).
Type II: concluding your IV didn't have an effect when it did (perhaps the effect was just too small to detect). Who knows how often this happens!

33 Common Inferential Tests
t-test (t): tests differences between 2 group means.
Correlation (r): tests the strength of the relationship between 2 variables.
Analysis of Variance (ANOVA) (F): tests differences between 3 or more group means.
Each test requires its own theoretical distribution and its own formula for standardizing scores to it.

34 Let's Practice
My hypothesis is that men will rate attractiveness as more important than women do when selecting a mate.
- What distribution will I use? (t, F, or r)
- If I survey 20 men and 22 women, what is my df?
- What will my critical value be at .05? At .01?
- If my computed value = 1.80, should I accept or reject my hypothesis?

35 Let's Practice
My hypothesis is that people will react differently to negative messages depending on whether they are sent by text or by handwritten note.
- What distribution should I use? (t, F, or r)
- If I have 33 people in all, what is my df?
- What is my critical value if alpha is .05? If it is .01?
- If my computed value = 4.1, should I reject or accept my hypothesis?

36 Places to Go to Learn More

