1 Presented by Mo Geraghty and Danny Tran June 10, 2016

2
 Q: Why do so many colleges and grad schools teach p = 0.05?
 A: Because that's still what the scientific community and journal editors use.
 Q: Why do so many people still use p = 0.05?
 A: Because that's what they were taught in college or grad school.
 ASA statement
 - George Cobb, Professor Emeritus of Mathematics and Statistics, Mt Holyoke College

3

4
 The p-value is not the probability that the null hypothesis is true or the probability that the alternative hypothesis is false. It is not connected to either.
 The p-value is not the probability that a finding is "merely a fluke."
 The p-value is not the probability of falsely rejecting the null hypothesis.
 The p-value is not the probability that replicating the experiment would yield the same conclusion.
 The significance level, such as 0.05, is not determined by the p-value.
 The p-value does not indicate the size or importance of the observed effect.
 Misconceptions about the p-value have their own Wikipedia page.

5
 Informally, a p-value is the probability under a specified statistical model that a statistical summary of the data (e.g., the sample mean difference between two compared groups) would be equal to or more extreme than its observed value.
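
A minimal sketch of this definition in code, using made-up data and a two-sample t-test (not an example from the presentation): the reported p-value is the probability, under a null model in which the two group means are equal, of a t statistic at least as extreme as the one observed.

# Hypothetical data; the model and test are assumptions for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(loc=10.0, scale=2.0, size=30)   # made-up measurements
group_b = rng.normal(loc=11.0, scale=2.0, size=30)

# Two-sided two-sample t-test: p = P(|T| >= |t_obs|) assuming equal means.
t_obs, p_value = stats.ttest_ind(group_a, group_b)
print(f"observed t = {t_obs:.2f}, two-sided p-value = {p_value:.3f}")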

6
 P-values can indicate how incompatible the data are with a specified statistical model.
 The smaller the p-value, the greater the statistical incompatibility of the data with the null hypothesis, if the underlying assumptions used to calculate the p-value hold.
 This incompatibility can be interpreted as casting doubt on or providing evidence against the null hypothesis or the underlying assumptions.

7
 P-values do not measure the probability that the studied hypothesis is true, or the probability that the data were produced by random chance alone.
 The p-value is a statement about data in relation to a specified hypothetical explanation, and is not a statement about the explanation itself.

8
 The p-value is not P(Ho is true | getting data this extreme).
 The p-value is P(getting data this extreme | Ho is true).

9
 Suppose there is a 5% probability that a research hypothesis (Ha) is true (prior).
 You conduct the test with 90% power.
 The p-value of the test is 0.04.
 Using Bayes' Rule (a sketch of the calculation follows below):
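
The calculation on the slide did not survive the transcript. Below is a minimal sketch of the usual form of this example, assuming we condition on rejecting Ho at the 0.05 level with the stated prior and power.

# Assumed setup: prior P(Ha) = 0.05, power = P(reject Ho | Ha) = 0.90,
# and a false-positive rate alpha = P(reject Ho | Ho) = 0.05.
prior_ha = 0.05
power = 0.90
alpha = 0.05

# Bayes' Rule: P(Ha | reject) = P(reject | Ha) * P(Ha) / P(reject)
p_reject = power * prior_ha + alpha * (1 - prior_ha)
posterior_ha = power * prior_ha / p_reject
print(f"P(Ha | significant result) = {posterior_ha:.2f}")   # roughly 0.49

Under these assumptions the research hypothesis is only about as likely as not to be true, even though a naive reading of p = 0.04 might suggest it is 96% certain.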

10
 Scientific conclusions and business or policy decisions should not be based only on whether a p-value passes a specific threshold.
 A conclusion does not immediately become "true" on one side of the divide and "false" on the other.
 Researchers should bring many contextual factors into play to derive scientific inferences, including the design of a study, the quality of the measurements, the external evidence for the phenomenon under study, and the validity of assumptions that underlie the data analysis.

11
 Proper inference requires full reporting and transparency.
 P-values and related analyses should not be reported selectively.
 Cherry-picking promising findings, also known by such terms as data dredging, significance chasing, significance questing, selective inference, and "p-hacking," leads to a spurious excess of statistically significant results in the published literature and should be vigorously avoided.
 Example of p-hacking (from xkcd); a small simulation in the same spirit follows below.
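
A small simulation in the spirit of the linked xkcd comic (the scenario and numbers are hypothetical, not from the presentation): test 20 "colors" when nothing real is going on, and a nominally significant p-value will often appear by chance alone.

# With 20 independent tests of a true null at alpha = 0.05, the chance of at
# least one p < 0.05 is 1 - 0.95**20, about 64%.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_tests, n_per_group = 20, 50

p_values = []
for _ in range(n_tests):
    treated = rng.normal(size=n_per_group)   # pure noise, no true effect
    control = rng.normal(size=n_per_group)
    p_values.append(stats.ttest_ind(treated, control).pvalue)

print(f"smallest of {n_tests} p-values: {min(p_values):.3f}")
print("tests with p < 0.05:", sum(p < 0.05 for p in p_values))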

12
 A p-value, or statistical significance, does not measure the size of an effect or the importance of a result.
 Statistical significance is not equivalent to scientific, human, or economic significance.
 Smaller p-values do not necessarily imply the presence of larger or more important effects, and larger p-values do not imply a lack of importance or even lack of effect (see the sketch below).
 Some research journals no longer look at p-values, but instead look at effect sizes.
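
A hypothetical illustration of the point about effect size: with a large enough sample, a negligible difference still yields a very small p-value, so the p-value alone says little about how big or important the effect is. The numbers are invented for illustration.

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 2_000_000
a = rng.normal(loc=100.0, scale=15.0, size=n)
b = rng.normal(loc=100.1, scale=15.0, size=n)   # a trivially small true shift

t, p = stats.ttest_ind(a, b)
# Cohen's d: standardized effect size (difference in means / pooled SD).
cohens_d = (b.mean() - a.mean()) / np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
print(f"p-value = {p:.1e}, Cohen's d = {cohens_d:.3f}")   # tiny p, tiny effect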

13
 By itself, a p-value does not provide a good measure of evidence regarding a model or hypothesis.
 Researchers should recognize that a p-value without context or other evidence provides limited information.
 A relatively large p-value does not imply evidence in favor of the null hypothesis; many other hypotheses may be equally or more consistent with the observed data (see the simulation below).
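
A hypothetical simulation of the last point: in an underpowered study, a genuine effect routinely produces p > 0.05, so a large p-value is not by itself evidence that the null hypothesis is true.

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_sims, n_per_group = 2000, 15     # small groups -> low power
true_effect = 0.5                  # a genuine half-standard-deviation difference

misses = 0
for _ in range(n_sims):
    a = rng.normal(0.0, 1.0, n_per_group)
    b = rng.normal(true_effect, 1.0, n_per_group)
    if stats.ttest_ind(a, b).pvalue > 0.05:
        misses += 1

# A majority of these studies "fail to find" an effect that really exists.
print(f"fraction with p > 0.05 despite a real effect: {misses / n_sims:.2f}")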

14
 Methods that emphasize estimation over testing, such as confidence, credibility, or prediction intervals (an interval example is sketched below)
 Bayesian methods
 Alternative measures of evidence, such as likelihood ratios or Bayes factors
 Other approaches such as decision-theoretic modeling and false discovery rates
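
A minimal sketch of the first alternative on the list, reporting an interval estimate for the effect rather than only a p-value. The data are hypothetical, and the interval uses a simple pooled-degrees-of-freedom approximation (a Welch-style interval would be more careful).

import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
a = rng.normal(10.0, 2.0, 40)
b = rng.normal(11.0, 2.0, 40)

diff = b.mean() - a.mean()
se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
df = len(a) + len(b) - 2
t_crit = stats.t.ppf(0.975, df)            # two-sided 95% critical value
print(f"estimated difference: {diff:.2f}, "
      f"95% CI: [{diff - t_crit * se:.2f}, {diff + t_crit * se:.2f}]")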

15
 Good statistical practice, as an essential component of good scientific practice, emphasizes:
◦ principles of good study design and conduct
◦ a variety of numerical and graphical summaries of data
◦ understanding of the phenomenon under study
◦ interpretation of results in context
◦ complete reporting
◦ proper logical and quantitative understanding of what data summaries mean
 No single index should substitute for scientific reasoning.

16
 http://amstat.tandfonline.com/doi/pdf/10.1080/00031305.2016.1154108
 http://www.stat.columbia.edu/~gelman/research/published/pvalues3.pdf
 http://jnci.oxfordjournals.org/content/99/4/332.full
 http://fivethirtyeight.com/features/statisticians-found-one-thing-they-can-agree-on-its-time-to-stop-misusing-p-values/
 http://fivethirtyeight.com/features/not-even-scientists-can-easily-explain-p-values/
 http://www.vox.com/2016/5/9/11638808/john-oliver-science-studies-last-week-tonight
 http://freakonometrics.hypotheses.org/19817
 http://www.bayesianphilosophy.com/the-myth-of-p-hacking/
 https://bayesianbiologist.com/2011/08/21/p-value-fallacy-on-more-or-less/
 https://xkcd.com/1478/

