Hand in your Homework Assignment

Introduction to Statistics for the Social Sciences SBS200 - Lecture Section 001, Fall 2017 Room 150 Harvill Building 10:00 - 10:50 Mondays, Wednesdays & Fridays. Welcome

[Seating chart: Harvill 150 renumbered, with Lecturer’s desk, Projection Booth, Screen, Rows A through P of numbered seats, a renumbered table, and a left-handed desk.]

A note on doodling

Start Project 3 next week. Lab sessions: everyone will want to be enrolled in one of the lab sessions. Labs are all done with presentations.

Schedule of readings: before the next exam (November 17th), please read Chapters 1 - 11 in the OpenStax textbook and Chapters 2, 3, and 4 in Plous. Chapter 2: Cognitive Dissonance; Chapter 3: Memory and Hindsight Bias; Chapter 4: Context Dependence.

Preview of homework assignment

Rejecting the null hypothesis. The result is “statistically significant” if: (1) the observed statistic is larger than the critical statistic (observed stat > critical stat); if we want to reject the null, we want our t (or z, r, F, or χ²) to be big!! And, equivalently, (2) the p value is less than 0.05, which is our alpha (p < 0.05); if we want to reject the null, we want our p to be small!! If we reject the null hypothesis, then we have support for our alternative hypothesis. Review
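
The two equivalent decision rules above can be sketched in a few lines of Python (a minimal illustration; the function names are ours, not part of the course materials):

```python
def reject_null(observed_stat: float, critical_stat: float) -> bool:
    """Reject H0 when the observed statistic is farther out than the critical one."""
    return abs(observed_stat) > abs(critical_stat)


def reject_null_by_p(p_value: float, alpha: float = 0.05) -> bool:
    """The equivalent rule on the p-value side: reject when p < alpha."""
    return p_value < alpha
```

For an observed z of 2.2 against a critical z of 1.96 (p ≈ .028), both rules say reject; for an observed z of 1.5 (p ≈ .13), neither does.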

Deciding whether or not to reject the null hypothesis: .05 versus .01 alpha levels. What if our observed z = 2.0? How would the critical z change? At α = .05 (significance level = .05) the critical z is -1.96 or +1.96, so p < 0.05: a significant difference, reject the null. At α = .01 (significance level = .01) the critical z is -2.58 or +2.58: not a significant difference, do not reject the null. Remember, reject the null if the observed z is bigger than the critical z. Review

One versus two-tailed test of significance: comparing different critical scores at the same alpha level (e.g., alpha = 5%). Two-tailed, the 5% is split as 2.5% in each tail beyond the central 95%; one-tailed, all 5% sits in one tail and the critical z drops to 1.64. How would the critical z change? Pros and cons… Review

One versus two-tailed test of significance: 5% versus 1% alpha levels. What if our observed z = 2.45? How would the critical z change? One-tailed, α = .05 (significance level = .05): critical z = -1.64 or +1.64, reject the null. Two-tailed, α = .05: critical z = -1.96 or +1.96, reject the null. One-tailed, α = .01 (significance level = .01): critical z = -2.33 or +2.33, reject the null. Two-tailed, α = .01: critical z = -2.58 or +2.58, do not reject the null. Remember, reject the null if the observed z is bigger than the critical z. Review
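
The critical z values above all come from the inverse normal CDF; here is a small sketch using only Python's standard library (the helper name `critical_z` is ours):

```python
from statistics import NormalDist

def critical_z(alpha: float, two_tailed: bool = True) -> float:
    """Critical z cutoff for a given significance level."""
    tail_area = alpha / 2 if two_tailed else alpha  # two-tailed splits alpha across both tails
    return NormalDist().inv_cdf(1 - tail_area)
```

Rounding to two places recovers the familiar cutoffs: 1.96 and 2.58 two-tailed, 1.64 and 2.33 one-tailed.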

Study Type 2: t-test. Comparing two means? Use a t-test. We are looking to compare two means. http://www.youtube.com/watch?v=n4WQhJHGQB4

Five steps to hypothesis testing. Step 1: Identify the research problem (hypothesis); describe the null and alternative hypotheses. Step 2: Decision rule: alpha level (α = .05 or .01)? One or two-tailed test (a balance between Type I versus Type II error)? What is the critical statistic (e.g., z, t, F, or r) value? Step 3: Calculations. Step 4: Make the decision whether or not to reject the null hypothesis: if the observed z (or t) is bigger than the critical z (or t), then reject the null. Step 5: Conclusion: tie the findings back in to the research problem.

Degrees of Freedom. We lose one degree of freedom for every parameter we estimate. Degrees of freedom (d.f.) is a parameter based on the sample size that is used to determine the value of the t statistic. Degrees of freedom tell how many observations are used to calculate s, less the number of intermediate estimates used in the calculation.
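
The “lose one degree of freedom” idea shows up directly in the sample standard deviation, which divides by n - 1 rather than n because the mean has already been estimated from the same data. A quick check with made-up numbers:

```python
import math
import statistics

data = [4.0, 6.0, 5.0, 7.0, 3.0]   # illustrative sample, not from the lecture
n = len(data)
df = n - 1                          # one df spent estimating the mean
mean = sum(data) / n
s = math.sqrt(sum((x - mean) ** 2 for x in data) / df)

# statistics.stdev uses the same n - 1 denominator
assert abs(s - statistics.stdev(data)) < 1e-12
```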

A note on z scores and t scores: the numerator is always the distance between means (how far apart the distributions are, the “effect size”); the denominator is always a measure of variability (how wide the curves are or how much they overlap, the within-group variability). So both scores have the form: difference between means / variability of curve(s).

Effect size is considered relative to the variability of the distributions. 1. Larger variance: harder to find a significant difference in the treatment effect. 2. Smaller variance: easier to find a significant difference.

Effect size is considered relative to variability of distributions: the treatment effect is the difference between means, judged against the variability of the curve(s) (within-group variability).

A note on variability versus effect size: the ratio compares the difference between means (effect size) to the variability of the curve(s) (within-group variability).

Hypothesis testing: A review. If the observed stat is more extreme than the critical stat in the distribution (curve): then it is so rare (taking into account the variability) that we conclude it must be from some other distribution; the decision considers effect size and variability; we reject the null hypothesis (we have a significant result) and we have support for our alternative hypothesis; p < 0.05 (p < α). If the observed stat is NOT more extreme than the critical stat in the distribution (curve): then we know it is a common score (either because the effect size is too small or because the variability is too big) and is likely to be part of this null distribution; we conclude it must be from this distribution; the decision considers effect size and variability (the data could be overly variable); we do not reject the null hypothesis, and we do not have support for our alternative hypothesis; p is not less than 0.05 (p not less than α); p is n.s. Review

Five steps to hypothesis testing. Step 1: Identify the research problem (hypothesis); how is a t score different from a z score? Describe the null and alternative hypotheses. Step 2: Decision rule: find the “critical z” score: alpha level (α = .05 or .01)? One versus two-tailed test? Population versus sample standard deviation? Step 3: Calculations. Step 4: Make the decision whether or not to reject the null hypothesis: if the observed z (or t) is bigger than the critical z (or t), then reject the null. Step 5: Conclusion: tie the findings back in to the research problem.

Comparing z-score distributions with t-score distributions. Similarities: both use bell-shaped distributions to make confidence interval estimations and decisions in hypothesis testing, and both use a table to find areas under the curve (a different table, though; the areas often differ from z scores). Summary of the 2 main differences for t-scores: we are now estimating the standard deviation from the sample (we don’t know the population standard deviation), and we have to deal with degrees of freedom.

Comparing z-score distributions with t-score distributions. Differences include: 1) We use the t-distribution when we don’t know the standard deviation of the population and have to estimate it from our sample. 2) The shape of the sampling distribution is very sensitive to small sample sizes (it actually changes shape depending on n). Please notice: as sample sizes get smaller, the tails get thicker; as sample sizes get bigger, the tails get thinner and look more like the z-distribution.

Comparing z-score distributions with t-score distributions: critical t (just like critical z) separates common from rare scores. Critical t is used to define both common scores (the “confidence interval”) and rare scores (the “region of rejection”).

Comparing z-score distributions with t-score distributions. Please note: once sample sizes get big enough, the t distribution (curve) starts to look exactly like the z distribution (curve). 3) Because the shape changes, the relationship between the scores and the proportions under the curve changes. (So we would need a different table for all the different possible n’s, but just the important ones are summarized in our t-table.)
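
The thicker tails can be checked by simulation: draw many small samples from a normal population with a true mean of 0, compute each sample's t, and count how often |t| lands beyond the z cutoff of 1.96. This is a sketch under the usual assumptions (normal population, independent draws); the helper name is ours:

```python
import random
import statistics

random.seed(42)

def tail_fraction(n: int, sims: int = 5000, cutoff: float = 1.96) -> float:
    """Fraction of simulated t statistics (true mean 0) beyond the z cutoff."""
    hits = 0
    for _ in range(sims):
        sample = [random.gauss(0, 1) for _ in range(n)]
        t = statistics.mean(sample) / (statistics.stdev(sample) / n ** 0.5)
        if abs(t) > cutoff:
            hits += 1
    return hits / sims

small_n = tail_fraction(n=5)    # thick tails: well above the 5% a z-table promises
large_n = tail_fraction(n=100)  # nearly normal: close to 5%
```

With n = 5 the fraction beyond ±1.96 comes out around 12%, not 5%; by n = 100 it is close to 5%, which is exactly why the t-table rows converge toward the z values.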

A quick re-visit with the law of large numbers: the relationship between increased sample size, decreased variability, and smaller “critical values”. As n goes up, variability goes down.

Law of large numbers: as the number of measurements increases, the data become more stable and a better approximation of the true signal (e.g., the mean). As the number of observations (n) increases, or the number of times the experiment is performed, the signal becomes clearer (static cancels out). With only a few people, any little error is noticed (it becomes exaggerated when we look at the whole group); with many people, any little error is corrected (it becomes minimized when we look at the whole group). http://www.youtube.com/watch?v=ne6tB2KiZuk
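
The same point can be made concrete with a short simulation: repeat an “experiment” of size n many times and watch the spread of the sample means shrink as n grows. The numbers below (IQ-like scores with mean 100, sd 15) are ours, chosen for illustration:

```python
import random
import statistics

random.seed(1)

def spread_of_means(n: int, reps: int = 300) -> float:
    """SD of the sample means across repeated experiments of size n."""
    means = [statistics.mean([random.gauss(100, 15) for _ in range(n)])
             for _ in range(reps)]
    return statistics.stdev(means)

noisy = spread_of_means(10)     # roughly 15 / sqrt(10), about 4.7
stable = spread_of_means(1000)  # roughly 15 / sqrt(1000), about 0.5
```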

Interpreting the t-table. We use degrees of freedom (df) to approximate sample size. Technically, we have a different t-distribution for each sample size; the t-table summarizes the most useful values for several distributions, organized by degrees of freedom. Each curve is based on its own degrees of freedom (based on sample size) and its own tie between t-scores and the area under the curve. Remember these useful values for z-scores? 1.64, 1.96, 2.58.

The t-table is laid out by the area between two scores, the area beyond two scores (out in the tails), and the area in each tail, with one row per df. Notice that with a large sample size the entries are the same values as the z-scores: 1.64, 1.96, 2.58.
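
That convergence is why a t-table only needs a modest set of rows; the pattern can be captured with a tiny lookup. The entries below are standard two-tailed α = .05 t-table values, and the helper function is ours:

```python
# standard two-tailed critical t values at alpha = .05, by degrees of freedom
T_CRIT_05 = {1: 12.706, 4: 2.776, 10: 2.228, 24: 2.064, 100: 1.984}
Z_CRIT_05 = 1.96  # the z cutoff the rows converge toward

def critical_t(df: int) -> float:
    """Look up the critical t; beyond the table, the z cutoff is a close stand-in."""
    return T_CRIT_05.get(df, Z_CRIT_05)
```

Every entry exceeds 1.96, shrinking toward it as df grows; critical_t(24) = 2.064 is the cutoff the one-sample example relies on.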

Hypothesis testing: one-sample t-test. Let’s jump right in and do a t-test. Is the mean of my observed sample consistent with the known population mean, or did it come from some other distribution? We are given the following problem: 800 students took a chemistry exam. Accidentally, 25 students got an additional ten minutes. Did this extra time make a significant difference in the scores? The average number correct by the large class was 74. The scores for the sample of 25 were: 76, 72, 78, 80, 73, 70, 81, 75, 79, 76, 77, 79, 81, 74, 62, 95, 81, 69, 84, 76, 75, 77, 74, 72, 75. Please note: in this example we are comparing our sample mean with the population mean (one-sample t-test).

Hypothesis testing. Step 1: Identify the research problem / hypothesis: did the extra time given to this sample of students affect their chemistry test scores? Describe the null and alternative hypotheses (one-tail or two-tail test?): H0: µ = 74; H1: µ ≠ 74.

Step 2: Decision rule. α = .05, n = 25, so degrees of freedom (df) = n - 1 = 25 - 1 = 24, two-tail test. The familiar critical values were for z scores; we use a different table for t-tests.

Two-tail test, α = .05, df = 24: critical t(24) = 2.064.

Step 3: Calculations. µ = 74, N = 25, Σx = 1911, so the sample mean is x̄ = Σx / N = 1911 / 25 = 76.44. The deviations (x - x̄) sum to 0, and the squared deviations sum to Σ(x - x̄)² = 868.16, so s = √(868.16 / 24) = 6.01.

Step 3 (continued): the standard error is s / √N = 6.01 / √25 = 1.20, so t = (76.44 - 74) / 1.20 = 2.44 / 1.20 = 2.03. Observed t(24) = 2.03, to be compared against the critical t.

Hypothesis testing. Step 4: Make the decision whether or not to reject the null hypothesis. Observed t(24) = 2.03; critical t(24) = 2.064. 2.03 is not farther out on the curve than 2.064, so we do not reject the null hypothesis. Step 5: Conclusion: the extra time did not have a significant effect on the scores.
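
The whole worked example can be reproduced in a few lines of standard-library Python, mirroring the hand calculation above (a sketch, not part of the course materials):

```python
import math
import statistics

# the 25 students who received extra time
scores = [76, 72, 78, 80, 73, 70, 81, 75, 79, 76,
          77, 79, 81, 74, 62, 95, 81, 69, 84, 76,
          75, 77, 74, 72, 75]
mu = 74                                  # class average (population mean)
n = len(scores)

mean = statistics.mean(scores)           # 76.44
s = statistics.stdev(scores)             # about 6.01, dividing by df = n - 1 = 24
t = (mean - mu) / (s / math.sqrt(n))     # about 2.03

critical = 2.064                         # t-table: df = 24, two-tailed alpha = .05
reject = abs(t) > critical               # False: do not reject the null
```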

Hypothesis testing write-up: Did the extra time given to these 25 students affect their average test score? Start the summary with the two means (based on the DV) for the two levels of the IV; notice we are comparing a sample mean with a population mean: a single-sample t-test. Describe the type of test (t-test versus z-test) with a brief overview of the results, and finish with the statistical summary, which gives the type of test with degrees of freedom and the value of the observed statistic: t(24) = 2.03; n.s. (n.s. = “not significant”; p < 0.05 = “significant”). For example: “The mean score for those students who were given extra time was 76.44 percent correct, while the mean score for the rest of the class was only 74 percent correct. A t-test was completed and there appears to be no significant difference in the test scores for these two groups, t(24) = 2.03, n.s.” (If the results had instead been significant, the summary might read: t(24) = -5.71; p < 0.05.)

Thank you! See you next time!!