Statistical vs Clinical or Practical Significance

Statistical vs Clinical or Practical Significance
Will G Hopkins, Auckland University of Technology, Auckland, NZ
Outline: statistical significance (P values and null hypotheses); confidence limits (precision of estimation); clinical or practical significance (probabilities of benefit and harm); examples.

Background Most researchers and students misinterpret statistical significance and non-significance. Few people know the meaning of the P value that defines statistical significance. Reviewers and editors reject some papers with statistically non-significant effects that should be published. Use of confidence limits instead of a P value is only a partial solution to these problems. What's missing is some way to convey the clinical or practical significance of an effect.

The Research Endeavor
Research is a quest for truth. There are several research paradigms. In biomedical and other empirical positivist research… Truth is probabilistic. We study a sample to get an observed value of a statistic representing an interesting effect, such as the relationship between physical activity and health or performance. But we want the true (= population) value of the statistic. The observed value and the variability in the sample allow us to make an inference about the true value. Use of the P value and statistical significance is one approach to making such inferences. Its use-by date was December 31, 1999. There are better ways to make inferences.

Philosophy of Statistical Significance We can disprove, but not prove, things. Therefore, we need something to disprove. Let's assume the true effect is zero: the null hypothesis. If the value of the observed effect is unlikely under this assumption, we reject (disprove) the null hypothesis. "Unlikely" is related to (but not equal to) a probability or P value. P < 0.05 is regarded as unlikely enough to reject the null hypothesis (i.e., to conclude the effect is not zero). We say the effect is statistically significant at the 0.05 or 5% level. P > 0.05 means not enough evidence to reject the null. We say the effect is statistically non-significant. Some folks mistakenly accept the null and conclude "no effect".
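For readers who want to see this decision rule in action, here is a minimal Python sketch; the sample values are hypothetical, invented purely for illustration, and the slides that follow argue against relying on this rule.

```python
# Minimal sketch of the significance decision rule, using a one-sample t-test.
# The sample values below are hypothetical, invented purely for illustration.
from scipy import stats

sample = [2.1, -0.4, 3.5, 1.2, 0.8, 2.9, -1.1, 1.7]

t_stat, p_value = stats.ttest_1samp(sample, popmean=0)  # null hypothesis: true mean = 0
alpha = 0.05

print(f"t = {t_stat:.2f}, P = {p_value:.3f}")
if p_value < alpha:
    print("Statistically significant at the 5% level: reject the null hypothesis.")
else:
    print("Statistically non-significant: not enough evidence to reject the null "
          "(which does NOT mean the effect is zero).")
```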

Problems with this philosophy
We can disprove things only in pure mathematics, not in real life. Failure to reject the null doesn't mean we have to accept the null. In any case, true effects in real life are never zero. Never. Therefore, to assume that effects are zero until disproved is illogical, and sometimes impractical or dangerous. And 0.05 is arbitrary.
The answer? We need better ways to represent the uncertainties of real life:
- better interpretation of the classical P value
- more emphasis on (im)precision of estimation, through use of likely (= confidence) limits of the true value
- better types of P value, representing probabilities of clinical or practical benefit and harm

Traditional Interpretation of the P Value
Example: P = 0.20 for an observed positive value of a statistic. If the true value is zero, there is a probability of 0.20 of observing a more extreme positive or negative value.
[Figure: sampling distribution of the observed value if the true value = 0, plotted against the value of the effect statistic (negative to positive); the P value is the sum of the two tail areas beyond the observed value, 0.1 + 0.1.]
Problem: huh? (Hard to understand.)
Problem: everything that's wrong with statistical significance.

Better Interpretation of the P Value
For the same data, there is a probability of 0.10 (half the P value) that the true value is negative.
[Figure: probability distribution of the true value given the observed value, plotted against the value of the effect statistic; the area below zero is (P value)/2 = 0.10.]
Easier to understand, and avoids statistical significance, but…
Problem: having to halve the P value is awkward, although one could use one-tailed P values directly.
Problem: the focus is still on the zero or null value of the effect.
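The two interpretations are numerically linked. The following sketch assumes a normal sampling distribution and uses the observed value of 6.0 units and P = 0.20 from Example 1 later in this presentation; it back-calculates the standard error from the P value and confirms that, under a flat prior, the probability that the true value is negative is half the traditional P value.

```python
# Numerical check of the two interpretations, assuming a normal distribution.
# observed = 6.0 units and P = 0.20 are the values used in the slides' example.
from scipy.stats import norm

observed = 6.0          # observed value of the effect statistic
p_two_sided = 0.20      # traditional two-sided P value against the null of zero

# Back-calculate the standard error implied by the P value.
z = norm.ppf(1 - p_two_sided / 2)      # = |observed| / SE
se = observed / z                      # about 4.7 units

# Traditional interpretation: probability of a value at least this extreme
# (either sign) if the true value were zero.
p_traditional = 2 * (1 - norm.cdf(abs(observed) / se))    # = 0.20 by construction

# Better interpretation: probability that the TRUE value is negative,
# given the observed value (flat-prior assumption).
p_true_negative = norm.cdf(0, loc=observed, scale=se)     # = 0.10 = P/2

print(f"SE = {se:.2f}, traditional P = {p_traditional:.2f}, "
      f"P(true value < 0) = {p_true_negative:.2f}")
```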

Confidence (or Likely) Limits of the True Value
These define a range within which the true value is likely to fall. "Likely" is usually a probability of 0.95 (defining 95% limits).
[Figure: probability distribution of the true value given the observed value; the area between the lower and upper likely limits is 0.95.]
Problem: 0.95 is arbitrary and gives an impression of imprecision; 0.90, 0.68, or even 0.50 would be better…
Problem: we still have to assess the upper and lower limits and the observed value in relation to clinically important values.
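As a minimal sketch, likely (confidence) limits at different coverage levels follow directly from the observed value and its standard error. This normal-based calculation only approximates the t-based spreadsheet mentioned later, so the limits differ slightly from those quoted in the worked examples.

```python
# Likely (confidence) limits of the true value for several coverage probabilities.
# Same observed value (6.0 units) and standard error (~4.7 units) as above.
from scipy.stats import norm

observed, se = 6.0, 4.68

for coverage in (0.95, 0.90, 0.68, 0.50):
    z = norm.ppf(1 - (1 - coverage) / 2)
    lower, upper = observed - z * se, observed + z * se
    print(f"{coverage:.0%} likely limits: {lower:5.1f} to {upper:4.1f} units")
```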

Clinical Significance
Statistical significance focuses on the null value of the effect. More important is clinical significance, defined by the smallest clinically beneficial and harmful values of the effect. These values are usually equal and opposite in sign.
[Figure: the smallest clinically beneficial value, the smallest clinically harmful value, and the observed value marked on the scale of the effect statistic.]
We now combine these values with the observed value to make a statement about clinical significance.

The smallest clinically beneficial and harmful values define probabilities that the true effect could be clinically beneficial, trivial, or harmful (Pbeneficial, Ptrivial, Pharmful).
[Figure: probability distribution of the true value given the observed value, with the smallest clinically beneficial and harmful values marked; in this example Pbeneficial = 0.80, Ptrivial = 0.15, Pharmful = 0.05.]
These Ps make an effect easier to assess and (hopefully) to publish. Warning: these Ps are NOT the proportions of positive, non- and negative responders in the population.
The calculations are easy: put the observed value, the smallest beneficial/harmful value, and the P value into the confidence-limits spreadsheet at newstats.org. More challenging: choosing the smallest clinically important value, interpreting the probabilities, and publishing the work.
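The spreadsheet itself is not reproduced here, but the underlying calculation can be sketched as follows, assuming a normal distribution of the true value centered on the observed value (the newstats.org spreadsheet presumably uses exact t-based limits, so its results may differ slightly). The function name and arguments are illustrative, not Hopkins' own.

```python
# Sketch of the clinical-significance calculation: probabilities that the
# true effect is beneficial, trivial, or harmful.  Assumes a normal
# distribution of the true value centered on the observed value, with the
# standard error back-calculated from the two-sided P value.
from scipy.stats import norm

def clinical_probabilities(observed, p_value, smallest_worthwhile):
    """Return (P_beneficial, P_trivial, P_harmful) for an effect where positive = beneficial."""
    se = abs(observed) / norm.ppf(1 - p_value / 2)    # standard error implied by P
    p_beneficial = 1 - norm.cdf(smallest_worthwhile, loc=observed, scale=se)
    p_harmful = norm.cdf(-smallest_worthwhile, loc=observed, scale=se)
    p_trivial = 1 - p_beneficial - p_harmful
    return p_beneficial, p_trivial, p_harmful

# The case shown in the figure: observed = 6.0 units, P = 0.20,
# smallest worthwhile change = 2.0 units -> roughly 80/15/5%.
print(clinical_probabilities(6.0, 0.20, 2.0))
```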

How to Report Clinical Significance of Outcomes Examples for a minimum worthwhile change of 2.0 units. Example 1–clinically beneficial, statistically non-significant (see previous slide; inappropriately rejected by editors): The observed effect of the treatment was 6.0 units (90% likely limits –1.8 to 14 units; P = 0.20). The chances that the true effect is practically beneficial/trivial/harmful are 80/15/5%. Example 2–clinically beneficial, statistically significant (no problem with publishing): The observed effect of the treatment was 3.3 units (90% likely limits 1.3 to 5.3 units; P = 0.007). The chances that the true effect is practically beneficial/trivial/harmful are 87/13/0%.

Example 3–clinically unclear, statistically non-significant (the worst kind of outcome, due to small sample or large error of measurement; usually rejected, but could/should be published to contribute to a future meta-analysis): The observed effect of the treatment was 2.7 units (90% likely limits –5.9 to 11 units; P = 0.60). The chances that the true effect is practically beneficial/trivial/harmful are 55/26/18%. Example 4–clinically unclear, statistically significant (good publishable study; true effect is on the borderline of beneficial): The observed effect of the treatment was 1.9 units (90% likely limits 0.4 to 3.4 units; P = 0.04). The chances that the true effect is practically beneficial/trivial/harmful are 46/54/0%.

Example 5–clinically trivial, statistically significant (publishable rare outcome that can arise from a large sample size; usually misinterpreted as a worthwhile effect): The observed effect of the treatment was 1.1 units (90% likely limits 0.4 to 1.8 units; P = 0.007). The chances that the true effect is practically beneficial/trivial/harmful are 1/99/0%. Example 6–clinically trivial, statistically non-significant (publishable, but sometimes not submitted or accepted): The observed effect of the treatment was 0.3 units (90% likely limits –1.7 to 2.3 units; P = 0.80). The chances that the true effect is practically beneficial/trivial/harmful are 8/89/3%.
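Under the same normal-distribution assumption as the sketch above, the beneficial/trivial/harmful percentages quoted in Examples 1-6 can be approximately reproduced from just the observed value, the P value, and the smallest worthwhile change of 2.0 units; small differences from the slides are expected because the spreadsheet presumably uses t-based limits.

```python
# Approximate check of Examples 1-6 (smallest worthwhile change = 2.0 units),
# assuming a normal distribution of the true value centered on the observed value.
from scipy.stats import norm

examples = {          # name: (observed value in units, two-sided P value)
    "Example 1": (6.0, 0.20),
    "Example 2": (3.3, 0.007),
    "Example 3": (2.7, 0.60),
    "Example 4": (1.9, 0.04),
    "Example 5": (1.1, 0.007),
    "Example 6": (0.3, 0.80),
}
swc = 2.0  # smallest worthwhile change

for name, (obs, p) in examples.items():
    se = obs / norm.ppf(1 - p / 2)                       # SE implied by the P value
    beneficial = 1 - norm.cdf(swc, loc=obs, scale=se)
    harmful = norm.cdf(-swc, loc=obs, scale=se)
    trivial = 1 - beneficial - harmful
    print(f"{name}: beneficial/trivial/harmful = "
          f"{beneficial:.0%}/{trivial:.0%}/{harmful:.0%}")
```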

Qualitative Interpretation of Probabilities
We need to describe outcomes in plain language, and therefore need to describe the probabilities that the effect is beneficial, trivial, and/or harmful. Suggested schema:

The effect… (beneficial/trivial/harmful)    | Probability | Chances | Odds
is not…, is almost certainly not…           | <0.01       | <1%     | <1:99
is very unlikely to be…                     | 0.01–0.05   | 1–5%    | 1:99–1:19
is unlikely to be…, is probably not…        | 0.05–0.25   | 5–25%   | 1:19–1:3
is possibly (not)…, may (not) be…           | 0.25–0.75   | 25–75%  | 1:3–3:1
is likely to be…, is probably…              | 0.75–0.95   | 75–95%  | 3:1–19:1
is very likely to be…                       | 0.95–0.99   | 95–99%  | 19:1–99:1
is…, is almost certainly…                   | >0.99       | >99%    | >99:1
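A sketch of how this schema could be applied mechanically; the function name and exact phrasing are illustrative choices based on the table above.

```python
# Map a probability to the qualitative descriptor suggested in the table above.
def describe(prob, magnitude="beneficial"):
    """Translate a probability into the qualitative phrase from the schema."""
    if prob < 0.01:
        phrase = "is almost certainly not"
    elif prob < 0.05:
        phrase = "is very unlikely to be"
    elif prob < 0.25:
        phrase = "is unlikely to be"
    elif prob < 0.75:
        phrase = "is possibly"
    elif prob < 0.95:
        phrase = "is likely to be"
    elif prob < 0.99:
        phrase = "is very likely to be"
    else:
        phrase = "is almost certainly"
    return f"The effect {phrase} {magnitude}."

print(describe(0.80))                 # The effect is likely to be beneficial.
print(describe(0.02, "harmful"))      # The effect is very unlikely to be harmful.
```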

Summary When you report your research… Show the observed magnitude of the effect. Attend to precision of estimation by showing likely limits of the true value. Show the P value if you must, but do not test a null hypothesis and do not mention statistical significance. Attend to clinical or practical significance by stating the smallest clinically beneficial and/or harmful value then showing the probabilities that the true effect is beneficial, trivial, and harmful. Make a qualitative statement about the clinical or practical significance of the effect, using unlikely, very likely, and so on.

A New View of Statistics
This presentation was downloaded from A New View of Statistics (newstats.org), which covers summarizing data (simple & effect statistics, precision of measurement) and generalizing to a population (confidence limits, statistical models, dimension reduction, sample-size estimation).