Physics- atmospheric Sciences (PAS) - Room 201


1 Physics-Atmospheric Sciences (PAS) - Room 201
[Seating chart: Rows A through Q, numbered seats facing the screens and the lecturer's desk.]

2 MGMT 276: Statistical Inference in Management Fall 2015
Welcome

3

4 Schedule of readings Before our next exam (November 10th)
OpenStax Chapters 1-10 and Chapter 13. Plous (2, 3, & 4): Chapter 2: Cognitive Dissonance; Chapter 3: Memory and Hindsight Bias; Chapter 4: Context Dependence

5 Homework due – Thursday (October 29th)
On class website: Please print and complete homework worksheet #12: Hypothesis testing with z-tests and t-tests

6 When: Monday evening October 19th
Stats Review for Exam 2 by Jonathon & Nick When: Monday evening October 19th 6:30 – 7:30pm Room: ILC 120 Cost: $5.00 Which of the following best describes your experience with the review session? a. It was a very helpful review session b. It was only okay c. It was not a helpful review session d. I did not attend this review

7 By the end of lecture today 10/27/15
Logic of hypothesis testing; steps for hypothesis testing; levels of significance (levels of alpha): what does p < 0.05 mean? what does p < 0.01 mean?; hypothesis testing with z-scores and t-scores (one sample); hypothesis testing with t-scores (two independent samples); constructing brief, complete summary statements

8

9 Exam 2 went really well! Thanks for your patience and cooperation.
The grades are posted

10 Remember… In a negatively skewed distribution:
mean < median < mode. Here, 87.5 = mode = tallest point; 85 = median = middle score; 82 = mean = balance point. [Figure: frequency (y-axis) versus score on exam (x-axis). Note: the y-axis is always "frequency"; label the axes with names and numbers.]

11 Five steps to hypothesis testing
Step 1: Identify the research problem (hypothesis); describe the null and alternative hypotheses. Step 2: Decision rule: Alpha level? (α = .05 or .01?) One- or two-tailed test? (Balance between Type I versus Type II error.) Critical statistic (e.g., z or t or F or r) value? Step 3: Calculations. Step 4: Make decision whether or not to reject the null hypothesis: if the observed z (or t) is bigger than the critical z (or t), then reject the null. Step 5: Conclusion - tie findings back in to the research problem
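Step 4 above is a simple comparison, and it can be sketched in a few lines of plain Python (a toy helper, not part of the course materials; the function name is hypothetical):

```python
def decide(observed, critical):
    """Step 4: reject the null only if the observed statistic falls
    farther out on the curve than the critical value (two-tailed)."""
    if abs(observed) > abs(critical):
        return "reject null"
    return "fail to reject null"

print(decide(2.03, 2.064))   # observed inside the critical value
print(decide(-5.71, 2.064))  # observed far out in the tail
```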

12 We lose one degree of freedom for every parameter we estimate
Degrees of freedom (df) is a parameter based on the sample size that is used to determine the value of the t statistic. Degrees of freedom tell how many observations are used to calculate s, less the number of intermediate estimates used in the calculation.
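As a minimal sketch (plain Python, toy data), the one-sample case loses exactly one degree of freedom because the sample mean is the one intermediate estimate used when calculating s:

```python
import math

def sample_sd(scores):
    """Sample standard deviation: estimating the mean from the data
    costs one degree of freedom, hence the (n - 1) denominator."""
    n = len(scores)
    mean = sum(scores) / n
    ss = sum((x - mean) ** 2 for x in scores)  # sum of squared deviations
    return math.sqrt(ss / (n - 1))             # divide by df = n - 1

scores = [4, 6, 8, 10]       # toy data: n = 4, so df = 3
print(sample_sd(scores))
```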

13 A note on z scores and t scores:
The numerator is always the distance between means (how far apart the distributions are: the "effect size"). The denominator is always a measure of variability (how wide the curves are, or how much overlap there is between distributions: within-group variability). So z (or t) = difference between means / variability of curve(s).

14 A note on variability versus effect size
Effect size = difference between means / variability of curve(s) (within-group variability).

16 Effect size is considered relative to the variability of the distributions
1. Larger variance: harder to find a significant difference. 2. Smaller variance: easier to find a significant difference. [Figure: two pairs of treatment-effect curves, one with wide spread and one with narrow spread.]

17 Effect size is considered relative to the variability of the distributions
Effect size compares the difference between means (the treatment effect) against the variability of the curves (within-group variability). [Figure: treatment-effect curves annotated with this ratio.]
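The point above can be checked numerically (toy data, hypothetical numbers): the same 2-point difference between means yields a much larger t when within-group variability is small:

```python
import math

def one_sample_t(scores, mu):
    """t = (sample mean - population mean) / (s / sqrt(n))."""
    n = len(scores)
    mean = sum(scores) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in scores) / (n - 1))
    return (mean - mu) / (s / math.sqrt(n))

mu = 50                            # hypothetical population mean
noisy = [40, 60, 45, 55, 52, 60]   # mean 52, large spread
tight = [51, 52, 53, 51, 52, 53]   # mean 52, small spread
print(one_sample_t(noisy, mu))     # small t: hard to find a difference
print(one_sample_t(tight, mu))     # large t: easy to find a difference
```

Both samples sit exactly 2 points above µ, but only the low-variability sample produces a t far out in the tail.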

18 Five steps to hypothesis testing
Step 1: Identify the research problem (hypothesis): How is a t score different from a z score? Describe the null and alternative hypotheses. Step 2: Decision rule: find the "critical t" score: Alpha level? (α = .05 or .01?) One- versus two-tailed test? Step 3: Calculations (population versus sample standard deviation). Step 4: Make decision whether or not to reject the null hypothesis: if the observed z (or t) is bigger than the critical z (or t), then reject the null. Step 5: Conclusion - tie findings back in to the research problem

19 Comparing z score distributions with t-score distributions
z-scores - Similarities include: using bell-shaped distributions to make confidence-interval estimations and decisions in hypothesis testing; using a table to find areas under the curve (a different table, though - the areas often differ from those for z scores). t-scores - Summary of the 2 main differences: we are now estimating the standard deviation from the sample (we don't know the population standard deviation), and we have to deal with degrees of freedom.

20 Comparing z score distributions with t-score distributions
Differences include: 1) We use the t-distribution when we don't know the standard deviation of the population and have to estimate it from our sample. 2) The shape of the sampling distribution is very sensitive to small sample sizes (it actually changes shape depending on n). Please notice: as sample sizes get smaller, the tails get thicker; as sample sizes get bigger, the tails get thinner and look more like the z-distribution.

21 Comparing z score distributions with t-score distributions
Differences include: We use the t-distribution when we don't know the standard deviation of the population and have to estimate it from our sample. Critical t (just like critical z) separates common from rare scores: critical t is used to define both the common scores (the "confidence interval") and the rare scores (the "region of rejection").

23 Comparing z score distributions with t-score distributions
Please note: once sample sizes get big enough, the t distribution (curve) starts to look exactly like the z distribution (curve). Differences include: 1) We use the t-distribution when we don't know the standard deviation of the population and have to estimate it from our sample. 2) The shape of the sampling distribution is very sensitive to small sample sizes (it actually changes shape depending on n). 3) Because the shape changes, the relationship between the scores and the proportions under the curve changes. (So we would need a different table for each possible n, but just the important ones are summarized in our t-table.)
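A few standard two-tailed critical values (α = .05, copied from a standard t-table) make the convergence concrete - the thicker tails at small n push the critical value out, and it shrinks toward the z value of 1.96 as df grows:

```python
# Two-tailed critical t values for alpha = .05, from a standard t-table.
critical_t_05 = {4: 2.776, 9: 2.262, 24: 2.064, 120: 1.980}
z_critical = 1.96  # the z value the t values approach as df grows

for df in sorted(critical_t_05):
    print(f"df = {df:>3}: critical t = {critical_t_05[df]:.3f}")
# The smaller the df, the farther the critical value sits beyond 1.96.
```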

24 A quick re-visit with the law of large numbers
Relationship between increased sample size, decreased variability, and smaller "critical values": as n goes up, variability goes down.

25 Law of large numbers: As the number of measurements
increases, the data become more stable and a better approximation of the true signal (e.g., the mean). As the number of observations (n) increases, or the number of times the experiment is performed, the signal becomes clearer (the static cancels out). With only a few people, any little error is noticed (it becomes exaggerated when we look at the whole group); with many people, any little error is corrected (it becomes minimized when we look at the whole group).
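A small simulation illustrates this (toy data: the "signal" is a uniform distribution, and the seed and sample sizes are arbitrary choices): sample means bounce around much less when each sample has more observations.

```python
import random
import statistics

random.seed(1)  # reproducible toy simulation

def spread_of_sample_means(n, trials=2000):
    """Draw `trials` samples of size n from a uniform(0, 100) signal
    and measure how much the sample means bounce around."""
    means = [statistics.mean(random.uniform(0, 100) for _ in range(n))
             for _ in range(trials)]
    return statistics.stdev(means)

small = spread_of_sample_means(5)    # few observations: noisy means
large = spread_of_sample_means(100)  # many observations: stable means
print(small, large)                  # the spread shrinks as n grows
```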

26 Interpreting the t-table: We use degrees of freedom (df) to approximate sample size
Technically, we have a different t-distribution for each sample size; this t-table summarizes the most useful values for several distributions, organized by degrees of freedom. Each curve is based on its own degrees of freedom (df) - based on sample size - and its own table tying together t-scores with the area under the curve (e.g., n = 5 versus n = 17). Remember these useful values for z-scores: 1.64, 1.96, 2.58.

27 [t-table: columns show the area between two scores, the area beyond two scores (out in the tails), and the area in each tail, organized by df.]

28 [Same t-table.] Notice that with a large sample size the values are the same as the z-score values. Remember these useful values for z-scores: 1.64, 1.96, 2.58.

29 Degrees of freedom (df) is a parameter based on the sample size that is used to determine the value of the t statistic. Degrees of freedom tell how many observations are used to calculate s, less the number of intermediate estimates used in the calculation.

30 Pop Quiz – Part 1: Standard deviation and variance, for sample and population
These would be helpful to know by heart – please memorize these formulas.

31 Pop Quiz – Part 1: Standard deviation and variance, for sample and population. Part 2: When we move from a two-tailed test to a one-tailed test, what happens to the critical z score (bigger or smaller)? - Draw a picture - What effect does this have on the hypothesis test (easier or harder to reject the null)?

32 Pop Quiz – Part 3: 1. When do we use a t-test and when do we use a z-test? (Be sure to write out the formulae.) 2. How many steps are in hypothesis testing? (What are they?) 3. What is our formula for degrees of freedom in a one-sample t-test? 4. We lose one degree of freedom for every ________________ 5. What are the three parts to the summary (below)? The mean response time following the sheriff's new plan was 24 minutes, while the mean response time prior to the new plan was 30 minutes. A t-test was completed and there appears to be no significant difference in the response time following the implementation of the new plan, t(9) = -1.71; n.s.

33 Pop Quiz: Standard deviation and variance, for sample and population. Part 2: When we move from a two-tailed test to a one-tailed test, what happens to the critical z score (bigger or smaller)? The critical value gets smaller. - Draw a picture - What effect does this have on the hypothesis test (easier or harder to reject the null)? It gets easier to reject the null.

34 Pop Quiz / Writing Assignment
1. When do we use a t-test and when do we use a z-test? (Be sure to write out the formulae.) Use the t-test when you don't know the standard deviation of the population, and therefore have to estimate it using the standard deviation of the sample (population versus sample standard deviation).

35 Five steps to hypothesis testing
Step 1: Identify the research problem (hypothesis): How is a t score similar to a z score? How is a t score different from a z score? Describe the null and alternative hypotheses (same logic and same steps). Step 2: Decision rule: find the "critical z" score: Alpha level? (α = .05 or .01?) One- versus two-tailed test? Step 3: Calculate the observed z score. Step 4: Compare the "observed z" with the "critical z": if observed z > critical z, then reject the null (p < 0.05, and we have a significant finding). Step 5: Conclusion - tie findings back in to the research problem

36 Writing Assignment: 3. What is our formula for degrees of freedom in a one-sample t-test? One-sample t-test: degrees of freedom (df) = (n – 1). Two-sample t-test (first and second samples): degrees of freedom (df) = (n1 – 1) + (n2 – 1). 4. We lose one degree of freedom for every parameter we estimate. Use the word "parameter" when describing a whole population (not just a sample). Usually we don't know about the whole population, so we have to guess by using what we know about our sample. A short-hand way to let the reader know that we are describing a population (a parameter) is to use a Greek letter – for example, σ for the population standard deviation, and s for the sample standard deviation. In a t-test we never know the population standard deviation (the parameter σ); we have to estimate this one parameter (using s), so we lose one df: our degrees of freedom are n – 1.
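The two df formulas above can be written out directly (a plain-Python sketch; the function names are illustrative):

```python
def df_one_sample(n):
    """One-sample t-test: we estimate one parameter (sigma, via s)."""
    return n - 1

def df_two_sample(n1, n2):
    """Two independent samples: one s is estimated per group."""
    return (n1 - 1) + (n2 - 1)

print(df_one_sample(25))      # the chemistry-exam example later: df = 24
print(df_two_sample(10, 12))  # two groups of 10 and 12: df = 20
```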

37 Writing Assignment: 5. What are the three parts to the summary (below)?
Start the summary with the two means (based on the DV) for the two levels of the IV. Then describe the type of test (t-test versus ANOVA) with a brief overview of the results. Finish with the statistical summary: the type of test with degrees of freedom and the value of the observed statistic, tagged n.s. ("not significant") or p < 0.05 ("significant") - for example, t(9) = -1.71; n.s., or, if it *were* significant, t(9) = 3.93; p < 0.05. The mean response time following the sheriff's new plan was 24 minutes, while the mean response time prior to the new plan was 30 minutes. A t-test was completed and there appears to be no significant difference in the response time following the implementation of the new plan, t(9) = -1.71; n.s.
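The statistical-summary part follows a fixed pattern, so it can be generated mechanically (a hypothetical helper, shown only to make the three pieces - test type with df, observed value, significance tag - explicit):

```python
def stat_summary(df, t_obs, significant):
    """Format the statistical summary: type of test with degrees of
    freedom, value of the observed statistic, and significance tag."""
    tag = "p < 0.05" if significant else "n.s."
    return f"t({df}) = {t_obs:.2f}; {tag}"

print(stat_summary(9, -1.71, False))  # the sheriff example above
print(stat_summary(9, 3.93, True))    # the significant variant
```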

38

39 Hypothesis testing: one sample t-test
Let’s jump right in and do a t-test. Is the mean of my observed sample consistent with the known population mean, or did it come from some other distribution? We are given the following problem: 800 students took a chemistry exam. Accidentally, 25 students got an additional ten minutes. Did this extra time make a significant difference in the scores? The average number correct in the large class was 74. The scores for the sample of 25 were: 76, 72, 78, 80, 73, 70, 81, 75, 79, 76, 77, 79, 81, 74, 62, 95, 81, 69, 84, 76, 75, 77, 74, 72, 75. Please note: in this example we are comparing our sample mean with the population mean (one-sample t-test).

40 Hypothesis testing
Step 1: Identify the research problem / hypothesis: Did the extra time given to this sample of students affect their chemistry test scores? Describe the null and alternative hypotheses. One-tail or two-tail test? H0: µ = 74; H1: µ ≠ 74 (two-tailed).

41 Hypothesis testing
Step 2: Decision rule: α = .05; n = 25; degrees of freedom (df) = (n - 1) = (25 - 1) = 24; two-tail test. (The earlier table was for z scores; we use a different table for t-tests.)

42 Two-tail test, α = .05, df = 24: critical t(24) = 2.064

43 Hypothesis testing
Step 3: Calculations. µ = 74; N = 25; Σx = 1911, so the sample mean is x̄ = Σx / N = 1911 / 25 = 76.44. The deviations sum to zero, Σ(x - x̄) = 0, and the squared deviations sum to Σ(x - x̄)² = 868.16, so s = √(868.16 / 24) = 6.01. [Worked table: each score x, its deviation (x - x̄), and its squared deviation (x - x̄)².]

44 Hypothesis testing
Step 3 (continued): the standard error is s / √n = 6.01 / √25 = 1.20, so the observed t(24) = (76.44 - 74) / 1.20 = 2.03 (critical t = 2.064).
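The arithmetic on these two slides can be reproduced in a few lines of plain Python (a check of the worked example, using the 25 scores given earlier):

```python
import math

# The 25 chemistry-exam scores from the problem statement
scores = [76, 72, 78, 80, 73, 70, 81, 75, 79, 76,
          77, 79, 81, 74, 62, 95, 81, 69, 84, 76,
          75, 77, 74, 72, 75]
mu = 74  # known population mean (the whole class of 800)

n = len(scores)
mean = sum(scores) / n                      # 1911 / 25 = 76.44
ss = sum((x - mean) ** 2 for x in scores)   # sum of squared deviations
s = math.sqrt(ss / (n - 1))                 # sample sd, df = 24
t_obs = (mean - mu) / (s / math.sqrt(n))    # observed t
print(mean, round(ss, 2), round(s, 2), round(t_obs, 2))
# 2.03 < 2.064 (the critical t), so we fail to reject the null.
```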

45 Hypothesis testing: Step 4: Make decision whether or not to reject the null hypothesis: observed t(24) = 2.03; critical t(24) = 2.064. 2.03 is not farther out on the curve than 2.064, so we do not reject the null hypothesis. Step 5: Conclusion: the extra time did not have a significant effect on the scores.

46 Hypothesis testing: Did the extra time given to these 25 students affect their average test score? Start the summary with the two means (based on the DV) for the two levels of the IV; notice we are comparing a sample mean with a population mean (single-sample t-test). The mean score for those students who were given extra time was 76.44 percent correct, while the mean score for the rest of the class was only 74 percent correct. A t-test was completed and there appears to be no significant difference in the test scores for these two groups, t(24) = 2.03; n.s. Describe the type of test (t-test versus z-test) with a brief overview of the results, then finish with the statistical summary: the type of test with degrees of freedom, the value of the observed statistic, and n.s. ("not significant") or p < 0.05 ("significant"). Or, if the results *had* been significant: t(24) = -5.71; p < 0.05.

47 Thank you! See you next time!!

