
1 Introduction to Biostatistics for Clinical and Translational Researchers
KUMC Departments of Biostatistics & Internal Medicine University of Kansas Cancer Center FRONTIERS: The Heartland Institute of Clinical and Translational Research

2 Course Information Jo A. Wick, PhD
Office Location: Robinson
Lectures are recorded and posted under ‘Events & Lectures’

3 Objectives
Understand the role of statistics in the scientific process and how it is a core component of evidence-based medicine
Understand the features, strengths, and limitations of descriptive, observational, and experimental studies
Distinguish between association and causation
Understand the roles of chance, bias, and confounding in the evaluation of research

4 Course Calendar
July 5: Introduction to Statistics: Core Concepts
July 12: Quality of Evidence: Considerations for Design of Experiments and Evaluation of Literature
July 19: Hypothesis Testing & Application of Concepts to Common Clinical Research Questions
July 26: (Cont.) Hypothesis Testing & Application of Concepts to Common Clinical Research Questions

5 “No amount of experimentation can ever prove me right; a single experiment can prove me wrong.” Albert Einstein (1879–1955)

6 Vocabulary

7 Basic Concepts Statistics is a collection of procedures and principles for gathering data and analyzing information to help people make decisions when faced with uncertainty. In research, we observe something about the real world. Then we must infer details about the phenomenon that produced what we observed. A fundamental problem is that, very often, more than one phenomenon can give rise to the observations at hand!

8 Example: Infertility Suppose you are concerned about the difficulties some couples have in conceiving a child. It is thought that women exposed to a particular toxin in their workplace have greater difficulty becoming pregnant compared to women who are not exposed to the toxin. You conduct a study of such women, recording the time it takes to conceive.

9 Example: Infertility Of course, there is natural variability in time-to-pregnancy attributable to many causes aside from the toxin. Nevertheless, suppose you finally determine that those females with the greatest exposure to the toxin had the most difficulty getting pregnant.

10 Example: Infertility But what if there is a variable you did not consider that could be the cause? No study can consider every possibility.

11 Example: Infertility
It turns out that women who smoke while they are pregnant reduce the chance their daughters will be able to conceive, because the toxins involved in smoking affect the eggs in the female fetus. If you didn’t record whether or not the women had mothers who smoked while pregnant, you may draw the wrong conclusion about the industrial toxin.
(Diagram: Fertility is influenced by natural variability, the smoking behaviors of the mother, and environmental toxins.)

12 Example: Infertility (Type I Error)
(Diagram: time-to-conceive is measured in women exposed and unexposed to the toxin; in the group exposed to the toxin, the majority were also exposed to smoke in the womb, and a prolonged time-to-conceive is found. Lurking (confounding) variable → bias.)
Bias is the addition of systematic error to the outcome. In this example, smoking systematically increases the time-to-conceive in these women, making it appear as if exposure to the workplace toxin is causing more harm to fertility. The result is a Type I error.

13 Example: Infertility (Type II Error)
(Diagram: time-to-conceive is measured in women exposed and unexposed to the toxin; both groups have some smoking exposure, and an insignificant change in time-to-conceive is found. Lurking (confounding) variable → “noise.”)
Sometimes the confounding variable doesn’t systematically bias the outcome; it may simply create noise that causes us to miss the signal. In this example, smoking still systematically changes the outcome, but smoking behavior is approximately the same in both groups, so that systematic change occurs to about the same degree in each group, adding noise to the outcome. The more ‘error’ in the outcome, the larger the ‘signal’ needs to be for us to have the power to detect it. The result is a Type II error.

14 The Role of Statistics
The conclusions (inferences) we draw always come with some amount of uncertainty due to these unobserved/unanticipated issues. We must quantify that uncertainty in order to know how “good” our conclusions are. This is the role that statistics plays in the scientific process:
P-values (significance levels)
Levels of confidence
Standard errors of estimates
Confidence intervals
Proper interpretation (association versus causation)
Statistics not only defines what “good” is, but it tells us where our conclusions fall on that scale.
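As a minimal illustration of how a standard error and a confidence interval are computed, here is a short sketch; the cholesterol-like sample, its size, and the random seed are assumptions made for illustration, not values from the course.

```python
import numpy as np
from scipy import stats

# Hypothetical sample of serum cholesterol values (mg/dL); illustrative only.
rng = np.random.default_rng(42)
x = rng.normal(loc=200, scale=30, size=50)

mean = x.mean()
se = x.std(ddof=1) / np.sqrt(len(x))           # standard error of the mean
t_crit = stats.t.ppf(0.975, df=len(x) - 1)     # two-sided 95% critical value
ci = (mean - t_crit * se, mean + t_crit * se)  # 95% confidence interval

print(f"mean = {mean:.1f}, SE = {se:.1f}, 95% CI = ({ci[0]:.1f}, {ci[1]:.1f})")
```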

15 The Role of Statistics Scientists use statistical inference to help model the uncertainty inherent in their investigations.

16 Evidence-based Medicine
Evidence-based practice in medicine involves gathering evidence in the form of scientific data and applying the scientific method to inform clinical practice and the establishment or development of new therapies, devices, programs, or policies aimed at improving health.
As a trained statistician, I feel it is my duty to stress to my collaborators the importance of gathering rigorous, valid scientific evidence in their efforts, and to interpret it appropriately. By definition, statistics is the art and science of decision-making in the presence of uncertainty, so as a statistical collaborator I assist the decision-making process by quantifying the uncertainty associated with that evidence—I’m sure you’re at least marginally familiar with the p-value. I also help to maximize the value of the evidence within the constraints of the clinical scenario (e.g., budgets, patient population, etc.) through study design. I’m sure most of you have heard the term ‘evidence-based medicine’ because it is not a new initiative. Evidence-based practice in medicine involves the gathering of evidence (in the form of scientific or empirical data) and then applying the scientific method to inform the treatment of patients.

17 Types of Evidence
Scientific evidence: “empirical evidence, gathered in accordance with the scientific method, which serves to support or counter a scientific theory or hypothesis”
Type I: descriptive, epidemiological
Type II: intervention-based
Type III: intervention- and context-based
Before we talk more about evidence-based medicine, we should first define scientific evidence. An appropriate definition of scientific evidence is “empirical (or observable) evidence gathered in accordance with the scientific method.” There are many different types of scientific evidence, but the strength of evidence is directly dependent upon the type of experimental scenario that generated it. For example, observing a single case of disease resolve when therapy is applied does not provide the same level of confidence in the effectiveness of the therapy that observing many cases would. Clinical trial evidence can be classified as Type II evidence because it provides more solid evidence than simple observation, but less than a true real-world application of the intervention within the contexts of the population and environment. The reasons for this are beyond the scope of this talk, but suffice it to say that we have difficulty replicating the real world in the clinical setting, which may limit the generalizability of our results to the entire patient population.

18 Evidence-based Medicine
Evidence-based practice results in a higher likelihood of successful patient outcomes and more efficient use of health care resources.
When interventions based on evidence are actually implemented in the clinic, the implication is that they have been rigorously investigated and found to be superior in some way to other known alternatives. The final results are a higher likelihood of successful patient outcomes and more efficient use of health care resources. But how do we generate “good” evidence? Recall the definition—we need the scientific method.

19 The Scientific Method
(Diagram: a cycle of Observe, Experiment, Revise.)
The scientific method is the series of steps we use to investigate phenomena—acquiring new knowledge through observation and experimentation, and continuously updating previous knowledge with it. This process is cyclical for a reason—we should be continually moving forward in our scientific endeavors, always having a series of possible paths to travel depending on the new knowledge we acquire.

20 Clinical Evaluation
(Diagram: a cycle of Design & Hypothesis → Run Experiment → Evidence (Data) → Revise.)
For clinical evaluation of a therapy, the scientific method looks like this—observation of cases leads to hypotheses that can be supported or modified through data (new evidence) generated through experimentation. This process usually doesn’t start with a full-scale pivotal study—pivotal meaning we intend to make specific claims about the safety and efficacy of a therapy (Loscalzo, Circulation, 2009). The process needs to start small and incrementally grow toward a large definitive study. Step 0 can be thought of as your education and experience, the knowledge you gain from your mentors and through treating patients. This observation provides you with observational evidence that could lead you to form a hypothesis—that a particular mechanism is causing or worsening disease, or that a therapy could be used in a novel way to improve patient outcomes.

21 Types of Studies
Purpose of research:
To explore
To describe or classify
To establish relationships
To establish causality
Strategies for accomplishing these purposes (ranging from ambiguity to control):
Naturalistic observation
Case study
Survey
Quasi-experiment
Experiment

22 Generating Evidence (Complexity and Confidence)
(Diagram: studies are divided into descriptive studies, of populations or of individuals (case reports, case series, cross-sectional), and analytic studies, which are observational (case-control, cohort) or experimental (RCT); complexity and confidence increase across this spectrum.)
In fact, the quality of your evidence is not only dependent upon the size of your study but also on the study design you choose and implement. For instance, the literature may consist only of a few case reports and case series. These are very simple, observational, descriptive studies of individual patients and provide little more than anecdotal data. They are, in essence, step 0. Any of the study designs to the right of these would provide stronger evidence, with the randomized controlled trial providing the strongest scientific evidence from the design perspective. Why do they increase in confidence? One reason is the size—case reports and case series are small, and in general RCTs are larger. Another reason is directly listed—they are more complex. And the reason they are more complex is that we are being deliberate about eliminating sources of bias and noise in order to get the most reliable estimates possible. In the next several slides we will discuss a few of the more common types of study designs.

23 Observation versus Experiment
A designed experiment involves the investigator assigning (preferably randomly) some or all conditions to subjects. An observational study includes conditions that are observed, not assigned.
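A minimal sketch of the random assignment that distinguishes a designed experiment, as described above; the number of subjects and the two arm labels are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 40                               # hypothetical number of enrolled subjects
subjects = np.arange(1, n + 1)

# Randomly permute subjects, then split evenly into two study arms.
shuffled = rng.permutation(subjects)
treatment, placebo = shuffled[: n // 2], shuffled[n // 2 :]

print("Treatment arm:", sorted(treatment))
print("Placebo arm:  ", sorted(placebo))
```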

24 Example: Heart Study
Question: How does serum total cholesterol vary by age, gender, education, and use of blood pressure medication? Does smoking affect any of the associations?
Recruit n = 3000 subjects over two years
Take blood samples and have subjects answer a CVD risk factor survey
Outcome: serum total cholesterol
Factors: BP meds (observed, not assigned)
Confounders?

25 Example: Diabetes
Question: Will a new treatment help overweight people with diabetes lose weight?
N = 40 obese adults with Type II (non-insulin-dependent) diabetes (20 female/20 male)
Randomized, double-blind, placebo-controlled study of treatment versus placebo
Outcome: weight loss
Factor: treatment versus placebo

26 How to Talk to a Statistician?
“It’s all Greek to me . . .” Καλημέρα

27 Why Do I Need a Statistician?
Planning a study
Proposal writing
Data analysis and interpretation
Presentation and manuscript development

28 When Should I Seek a Statistician’s Help?
Literature interpretation
Defining the research questions
Deciding on data collection instruments
Determining appropriate study size

29 What Does the Statistician Need to Know?
General idea of the research (Specific Aims and hypotheses would be ideal)
What has been done before (literature review!)
Outcomes under consideration
Study population
Drug/intervention/device
Rationale for the study
Budget constraints

30 “No amount of experimentation can ever prove me right; a single experiment can prove me wrong.” Albert Einstein (1879–1955)

31 Vocabulary
Hypothesis: a statement of the research question that sets forth the appropriate statistical evaluation
Null hypothesis “H0”: statement of no difference or association between variables
Alternative hypothesis “H1”: statement of a difference or association between variables

32 Disproving the Null If someone claims that all swans are white, confirmatory evidence (in the form of lots of white swans) cannot prove the assertion to be true. Contradictory evidence (in the form of a single black swan) makes it clear the claim is invalid. The point is that we can disprove statements, but we cannot prove them. This is the principle of disconfirmation, and it forms the basis for scientific inquiry. Knowing that we can’t prove a hypothesis but can disprove it, we take the tack of attempting to disprove the null hypothesis. If we are successful, then we have, in an admittedly backwards and somewhat convoluted manner, supported our real hypothesis, the alternative hypothesis. “While you can’t prove that a statement or hypothesis is true, you can disprove that its opposite is true, thereby obtaining the desired result, provided that there are no possibilities other than your hypothesis and its opposite. It is really a rather ingenious system.”

33 The Scientific Method
(Diagram: Observation → Hypothesis → Experiment → Results; if the evidence is inconsistent with H, revise H; if the evidence supports H, continue.)
Scientists make progress by using the scientific method, a process of checking conclusions against nature. After observing something, a scientist tries to explain what has been seen. The explanation is called a hypothesis. There is always at least one alternative hypothesis. A part of nature is tested in a "controlled experiment" to see if the explanation matches reality. A controlled experiment is one in which all treatments are identical except that some are exposed to the hypothetical cause and some are not. Any differences in the way the treatments behave are attributed to the presence or lack of the cause. If the results of the experiment are consistent with the hypothesis, there is evidence to support the hypothesis. If the two do not match, the scientist seeks an alternative explanation and redesigns the experiment. When enough evidence accumulates, the understanding of this natural phenomenon is considered a scientific theory. A scientific theory persists until additional evidence causes it to be revised. Nature's reality is always the final judge of a scientific theory.

34 Hypothesis Testing By hypothesizing that the mean response of a population is 26.3, I am saying that I expect the mean of a sample drawn from that population to be ‘close to’ 26.3.

35 Hypothesis Testing What if, in collecting data to test my hypothesis, I observe a sample mean of 26? What conclusion might I draw?

36 Hypothesis Testing What if, in collecting data to test my hypothesis, I observe a sample mean of 27.5? What conclusion might I draw?

37 Hypothesis Testing What if, in collecting data to test my hypothesis, I observe a sample mean of 30? What conclusion might I draw?

38 Hypothesis Testing If the observed sample mean seems odd or unlikely under the assumption that H0 is true, then we reject H0 in favor of H1. We typically use the p-value as a measure of the strength of evidence against H0.

39 What is a P-value?
A p-value is the probability of getting a sample mean as favorable or more favorable to H1 than what was observed, assuming H0 is true. A p-value is the area under the null distribution’s curve for values of the sample mean more extreme than what we observed in the sample we actually gathered. The tail of the distribution it is in is determined by H1. If H1 states that the mean is greater than 26.3, the p-value is the area to the right of the observed sample mean. If H1 states that the mean is less than 26.3, the p-value is the area to the left of the observed sample mean. If H1 states that the mean is different than 26.3, the p-value is twice that area, accounting for the area in both tails.
(Figure: null distribution with the observed sample mean marked and the p-value shaded in the tail.)
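A small sketch of the p-value calculation just described, using a z-test for a single mean; the observed sample mean, the assumed known standard deviation, and the sample size are hypothetical values chosen for illustration.

```python
import numpy as np
from scipy import stats

mu0 = 26.3     # hypothesized mean under H0
xbar = 27.5    # observed sample mean (hypothetical)
sigma = 5.0    # assumed known standard deviation (hypothetical)
n = 50         # assumed sample size (hypothetical)

z = (xbar - mu0) / (sigma / np.sqrt(n))

p_upper = stats.norm.sf(z)           # H1: mean > 26.3 (right tail)
p_lower = stats.norm.cdf(z)          # H1: mean < 26.3 (left tail)
p_two = 2 * stats.norm.sf(abs(z))    # H1: mean != 26.3 (both tails)

print(f"z = {z:.2f}, one-sided p (>) = {p_upper:.3f}, two-sided p = {p_two:.3f}")
```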

40 Vocabulary One-tailed hypothesis: outcome is expected in a single direction (e.g., administration of experimental drug will result in a decrease in systolic BP) Two-tailed hypothesis: the direction of the effect is unknown (e.g., experimental therapy will result in a different response rate than that of current standard of care)

41 Vocabulary
Type I Error (α): a true H0 is incorrectly rejected. “An innocent man is proven GUILTY in a court of law.” Commonly accepted rate is α = 0.05.
Type II Error (β): failing to reject a false H0. “A guilty man is proven NOT GUILTY in a court of law.” Commonly accepted rate is β = 0.2.
Power (1 – β): correctly rejecting a false H0. “Justice has been served.” Commonly accepted value is 1 – β = 0.8.
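One way to see what α = 0.05 means in practice is to simulate many studies in which H0 is true and count how often a test rejects it; the two-sample t-test, sample sizes, and number of simulations below are assumptions for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n, alpha = 10_000, 30, 0.05
rejections = 0

for _ in range(n_sims):
    # Both groups come from the same distribution, so H0 (no difference) is true.
    a = rng.normal(loc=0.0, scale=1.0, size=n)
    b = rng.normal(loc=0.0, scale=1.0, size=n)
    _, p = stats.ttest_ind(a, b)
    rejections += p < alpha

print(f"Empirical Type I error rate: {rejections / n_sims:.3f}  (expected ~ {alpha})")
```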

42 Decisions
Conclusion \ Truth     H1                 H0
Conclude H1            Correct (Power)    Type I Error
Conclude H0            Type II Error      Correct

43 Statistical Power
Primary factors that influence the power of your study (see the sketch below):
Effect size: as the magnitude of the difference you wish to find increases, the power of your study will increase
Variability of the outcome measure: as the variability of your outcome decreases, the power of your study will increase
Sample size: as the size of your sample increases, the power of your study will increase
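A sketch of how these three factors enter a power calculation, using a normal approximation for a two-sided, two-sample comparison of means; the effect size, standard deviation, and sample sizes are hypothetical.

```python
import numpy as np
from scipy import stats

def two_sample_power(delta, sd, n_per_group, alpha=0.05):
    """Approximate power of a two-sided, two-sample z-test for a mean difference."""
    se = sd * np.sqrt(2.0 / n_per_group)    # standard error of the difference
    z_crit = stats.norm.ppf(1 - alpha / 2)  # two-sided critical value
    return stats.norm.sf(z_crit - abs(delta) / se)

# Power rises with larger effects, smaller variability, and bigger samples.
for n in (20, 50, 100):
    print(n, round(two_sample_power(delta=5, sd=10, n_per_group=n), 2))
```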

44 Statistical Power
Secondary factors that influence the power of your study:
Dropouts
Nuisance variation
Confounding variables
Multiple hypotheses
Post-hoc hypotheses

45 Hypothesis Testing We will cover these concepts more fully when we discuss Hypothesis Testing and Quality of Evidence

46 Descriptive Statistics

47 Field of Statistics
Descriptive Statistics: methods for processing, summarizing, presenting, and describing data
Experimental Design: techniques for planning and conducting experiments
Inferential Statistics: evaluation of the information generated by an experiment or through observation

48 Field of Statistics
(Diagram: Statistics branches into Descriptive (graphical, numerical), Inferential (estimation, hypothesis testing), and Experimental Design.)

49 Field of Statistics
Descriptive statistics: summarizing and describing the data; uses numerical and graphical summaries to characterize sample data
Inferential statistics: uses sample data to make conclusions about a broader range of individuals—a population—than just those who are observed (a sample)

50 Field of Statistics
Experimental design:
Formulation of hypotheses
Determination of experimental conditions, measurements, and any extraneous conditions to be controlled
Specification of the number of subjects required and the population from which they will be sampled
Specification of the procedure for assigning subjects to experimental conditions
Determination of the statistical analysis that will be performed

51 Descriptive Statistics
Descriptive statistics is one branch of the field of Statistics in which we use numerical and graphical summaries to describe a data set or distribution of observations.
(Diagram: Statistics branches into Descriptive (graphs) and Inferential (hypothesis testing, interval estimates).)

52 Types of Data All data contains information.
It is important to recognize that the hierarchy implied in the level of measurement of a variable has an impact on (1) how we describe the variable data and (2) what statistical methods we use to analyze it.

53 Levels of Measurement
Nominal: difference (discrete, qualitative)
Ordinal: difference, order (discrete, qualitative)
Interval: difference, order, equivalence of intervals (continuous, quantitative)
Ratio: difference, order, equivalence of intervals, absolute zero (continuous, quantitative)

54 Types of Data
NOMINAL → ORDINAL → INTERVAL → RATIO (information increases)
Measurement: the assignment of numbers to objects or events according to a set of rules. There are four measurement scales that result from the fact that measurements may be carried out using different rules. As you move upward on the measurement scale, the amount of information in your measurement increases.
The Nominal Scale: “naming” observations, or classifying them into various mutually exclusive categories. (In the book, a nominal variable is referred to as a categorical variable.) Examples: male/female; ID numbers (note that just because a measurement is a numerical value, it doesn’t necessarily try to quantify anything. An ID number just distinguishes us from one another—if I’m #3743, the only thing you can say about me is that I’m different from #4212); democrat/republican/independent. These are categorical variables. For a nominal variable, the only thing we are actually measuring is the difference between observed persons (objects, events, things, etc.). Person 1 is different from Person 2 because 1 is male, 2 is female, and the difference is of interest to the researcher.
The Ordinal Scale: the measurements can be ranked according to some criterion. Examples: intelligence rated low/average/high (note that arithmetic operations are not meaningful); classification in school (FR/SO/JR/SR/GR); sports rankings. These are categorical variables with inherent order. When ‘coding’ these ranked measurements, the numerical value that is assigned only implies order among the measurements. For instance, in football, being ranked 2nd doesn’t imply that you are twice as good as the 4th-ranked team, only that you are better. And the difference in the 1st- and 2nd-ranked teams may not be the same as the difference in the 2nd- and 3rd-ranked teams—the numbers imply only that 1 is better than 2, 2 is better than 3, and so on, but do not try to quantify the ‘betterness’ itself.
The Interval Scale: the measurements can not only be ranked, but the distance between any two measurements is known; there is no true zero point. Examples: temperature using the Celsius or Fahrenheit scale (60°F is 30°F greater than 30°F, but 60°F is not twice as hot as 30°F because 0°F does not represent an absence of heat); dates (it doesn’t make sense to say that the year 2000 is twice as old as the year 1000 because the year 0 wasn’t the beginning of time). Interval variables can be discrete or continuous. Discrete variables can take on only a finite number of values. All qualitative variables are discrete, and some quantitative variables are discrete, such as performance rated as 1, 2, 3, 4, or 5 (a discrete ordinal variable), or temperature (in °C) rounded to the nearest degree (a discrete interval variable). Sometimes a variable that takes on enough discrete values can be considered continuous for practical purposes; one example is time to the nearest millisecond. Continuous variables can take on an infinite number of possible values—for instance, temperature not rounded (a continuous interval variable), distance traveled, or time elapsed (both continuous ratio variables). So long as these are not measured in unit increments and you allow for “fraction” units to be measured (0.4 of a mile, 0.227 of a second), the measurement is continuous. With an interval variable, you can glean more information from its measurement—you now can tell what is better, and by how much! However, ratios don’t make sense due to the lack of a ‘starting point’ on the scale.
The Ratio Scale: measurements in which equality of ratios as well as equality of intervals is known. There is a true zero point; i.e., getting a value of zero for a measurement implies that there is an absence of the quantity you are measuring. Examples: height (0" is an absence of height, thus someone who is 6' tall is twice as tall as someone standing only 3'); weight; production (to produce 0 cars on a production line is to produce nothing, thus producing 12 cars on Tuesday means you’ve produced half the number of Monday, when 24 came off the line); time to complete a task (10 minutes is twice as long as 5 minutes).
Mutual exclusivity implies that at most one of the events may occur (you can’t be categorized as both democrat and republican simultaneously—you must be one, the other, or neither). Compare this to the concept of being collectively exhaustive, which means that at least one of the events must occur (you must be categorized as male or female). (Source: Wikipedia)

55 Ratio Data Ratio measurements provide the most information about an outcome. Different values imply difference in outcomes. 6 is different from 7. Order is implied. 6 is smaller than 7.

56 Ratio Data Intervals are equivalent.
The difference between 6 and 7 is the same as the difference between 101 and 102. Zero indicates a lack of what is being measured. If item A weighs 0 ounces, it weighs nothing.

57 Ratio Data Ratio measurements provide the most information about an outcome. Can make statements like: “Person A (t = 10 minutes) took twice as long to complete a task as Person B (t = 5 minutes).” This is the only type of measurement where statements of this nature can be made. Examples: age, birth weight, follow-up time, time to complete a task, dose

58 Interval Data Interval measurements are one step down on the “information” scale from ratio measurements. Difference and order are implied and intervals are equivalent. BUT, zero no longer implies an absence of the outcome. What is the interpretation of 0°C? Of 0 K? The Celsius and Fahrenheit scales of temperature are interval measurements; Kelvin is a ratio measurement.

59 Interval Data Interval measurements are one step down on the “information” scale from ratio measurements. You can tell what is better, and by how much, but ratios don’t make sense due to the lack of a ‘starting point’ on the scale. 60°F is greater than 30°F, but not twice as hot since 0°F doesn’t represent an absence of heat. Examples: temperature, dates

60 Ordinal Data Ordinal measurements are one step down on the “information” scale from interval measurements. Difference and order are implied. BUT, intervals are no longer equivalent. For instance, the difference in performance between the 1st- and 2nd-ranked teams in basketball isn’t necessarily equivalent to the difference between the 2nd- and 3rd-ranked teams. The ranking only implies that 1st is better than 2nd, 2nd is better than 3rd, and so on, but it doesn’t try to quantify the ‘betterness’ itself.

61 Ordinal Data Ordinal measurements are one step down on the “information” scale from interval measurements. Examples: highest level of education achieved, tumor grading, survey questions (e.g., Likert-scale quality of life)

62 Nominal Data Nominal measurements collect the least amount of information about the outcome. Only difference is implied. Observations are classified into mutually exclusive categories. Examples: Gender, ID numbers, pass/fail response

63 Levels of Measurement It is important to recognize that the hierarchy implied in the level of measurement of a variable has an impact on (1) how we describe the variable data and (2) what statistical methods we use to analyze it. The levels are in increasing order of mathematical structure—meaning that more mathematical operations and relations are defined—and the higher levels are required in order to define some statistics.

64 Levels of Measurement At the lower levels, assumptions tend to be less restrictive and the appropriate data analysis techniques tend to be less sensitive. In general, it is desirable to have a higher level of measurement. A summary of the appropriate statistical summaries and mathematical relations or operations is given in the next table.

65 Levels of Measurement
Level    | Statistical Summary                       | Mathematical Relation/Operation
Nominal  | Mode                                      | one-to-one transformations
Ordinal  | Median                                    | monotonic transformations
Interval | Mean, Standard Deviation                  | positive linear transformations
Ratio    | Geometric Mean, Coefficient of Variation  | multiplication by c > 0
We must know where an outcome falls on the measurement scale—this not only determines how we describe the data (descriptive statistics) but how we analyze it (inferential statistics).
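A small sketch of matching the summary statistic to the level of measurement, as in the table above; the example data are made up.

```python
import numpy as np
from collections import Counter
from scipy import stats

gender = ["F", "M", "F", "F", "M"]         # nominal
stage = [1, 3, 2, 2, 4]                    # ordinal (e.g., tumor stage)
temps_c = [36.8, 37.1, 38.4, 36.9, 37.0]   # interval (degrees Celsius)
weights = [61.2, 80.5, 72.0, 95.3, 68.7]   # ratio (kg)

print("Nominal  -> mode:          ", Counter(gender).most_common(1)[0][0])
print("Ordinal  -> median:        ", np.median(stage))
print("Interval -> mean, SD:      ", np.mean(temps_c), np.std(temps_c, ddof=1))
print("Ratio    -> geometric mean:", stats.gmean(weights))
```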

66 Using Graphs to Describe Data
Nominal and ordinal measurements are discrete and qualitative, even if they are represented numerically. Rank: 1, 2, 3 Gender: male = 1, female = 0 We typically use frequencies, percentages, and proportions to describe how the data is distributed among the levels of a qualitative variable. Bar and pie charts are even more useful.

67 Example: Myopia
A survey of n = 479 children found that those who had slept with a nightlight or in a fully lit room before the age of 2 had a higher incidence of nearsightedness later in childhood.
Lighting    | No Myopia  | Myopia     | High Myopia | Total
Darkness    | 155 (90%)  | 15 (9%)    | 2 (1%)      | 172 (100%)
Nightlight  | 153 (66%)  | 72 (31%)   | 7 (3%)      | 232 (100%)
Full Light  | 34 (45%)   | 26 (48%)   | 5 (7%)      | 75 (100%)
Total       | 342 (71%)  | 123 (26%)  | 14 (3%)     | 479 (100%)
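The row percentages in the table can be reproduced from the counts; here is a short sketch with pandas, using the counts from the table above.

```python
import pandas as pd

counts = pd.DataFrame(
    {"No Myopia": [155, 153, 34], "Myopia": [15, 72, 26], "High Myopia": [2, 7, 5]},
    index=["Darkness", "Nightlight", "Full Light"],
)

# Convert each row to percentages of that row's total (the lighting-group size).
row_pct = counts.div(counts.sum(axis=1), axis=0).mul(100).round(0)
print(row_pct)
```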

68 Example: Myopia
(Figure: bar chart of the myopia data; legend categories: High, Some, None.)

69 Example: Myopia As the amount of light during sleep time increases, the incidence of myopia increases. This study does not prove that sleeping with the light on causes myopia in more children. There may be some confounding factor that isn’t measured or considered, possibly genetics. Children whose parents have myopia are more likely to suffer from it themselves. It’s also possible that those parents are more likely to provide light while their children are sleeping.

70 Example: Nausea How many subjects experienced drug-related nausea?

71 Example: Nausea With unequal sample sizes across doses, it is more meaningful to use percent rather than frequency.

72 Bar & Pie Charts

73 Using Graphs to Describe Data
Interval and Ratio variables are continuous and quantitative and can be graphically and numerically represented with more sophisticated mathematical techniques. Height Survival Time We typically use means, standard deviations, medians, and ranges to describe how the variables tend to behave. Histograms and boxplots are even more useful.
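A minimal sketch of the two graphical summaries mentioned here, drawn for simulated survival-time data; the data and the exponential model used to generate them are assumptions.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
survival_months = rng.exponential(scale=5.0, size=100)  # hypothetical times

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
ax1.hist(survival_months, bins=15)
ax1.set(title="Histogram", xlabel="Months", ylabel="Frequency")
ax2.boxplot(survival_months)
ax2.set(title="Boxplot", ylabel="Months")
plt.tight_layout()
plt.show()
```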

74 Example: Time-to-death
Suppose that we record the variable x = time-to-death of n = 100 patients in a study.

75 Example: Time-to-death
We can quickly observe several characteristics of the data from the histogram: for most subjects, death occurred between 0 and 5 months; for a few subjects, death occurred past 15 months. From this picture, we may wish to identify the distinguishing characteristics of the individuals with unusually long times.

76 Example: Weight Suppose we record the weight in pounds of n = 100 subjects in a study.

77 Example: Tooth Growth Boxplots represent the same information, but are more useful for comparing characteristics between several data sets. Right: distributions of tooth growth for two supplements and three dose levels

78 Using Numbers to Describe Data
Nominal and ordinal measurements are discrete and qualitative, even if they are represented numerically. Rank: 1, 2, 3 Gender: male = 1, female = 0 Interval and Ratio variables are continuous and quantitative and can be graphically and numerically represented with more sophisticated mathematical techniques. Height Survival Time

79 Using Numbers to Describe Data
Nominal and ordinal measurements are qualitative, even if they are represented numerically. We typically describe qualitative data using frequencies and percentages in tables. Measures of central tendency and variability don’t make as much sense with categorical data, though the mode can be reported.

80 Describing Data Interval and ratio measurements are quantitative. When dealing with quantitative measurements, we typically describe three aspects of their distribution. Central tendency: a single value around which the data tend to fall. Variability: a value that represents how scattered the data are around that central value; large values are indicative of high scatter. We also want to describe the shape of the distribution of the sample data values.

81 Central Tendency
Mean: arithmetic average of the data
Median: approximate middle of the data
Mode: most frequently occurring value

82 Central Tendency Mode, Mo
The most frequently occurring value in the data set. May not exist or may not be uniquely defined. It is the only measure of central tendency that can be used with nominal variables, but it is also meaningful for quantitative variables that are inherently discrete (e.g., performance of a task). Its sampling stability is very low (i.e., it varies greatly from sample to sample).

83 Central Tendency: Mode

84 Central Tendency: Mode

85 Central Tendency Median, M
The middle value (Q2, the 50th percentile) of the variable. It is appropriate for ordinal measures and for skewed interval or ratio measures because it isn’t affected by extreme values. It’s unaffected (robust to outliers) because it takes into account only the relative ordering and number of observations, not the magnitude of the observations themselves. It has low sampling stability.

86 Example: Median Suppose we have a set of observations: 1 2 2 4
The median for this set is M = 2. Now suppose we accidentally mismeasured the last observation: The median for this new set is still M = 2.

87 Central Tendency: Median
(Figure: distribution with the mode Mo and the median M marked.)

88 Central Tendency
Mean (x̄): the arithmetic average of the variable x. It is the preferred measure for interval or ratio variables with relatively symmetric observations. It has good sampling stability (i.e., it varies the least from sample to sample), implying that it is better suited for making inferences about population parameters. It is affected by extreme values because it takes into account the magnitude of every observation. It can be thought of as the center of gravity of the variable’s distribution.

89 Example: Mean
Suppose we have a set of observations: 1 2 2 4. The median for this set is M = 2, and the mean is x̄ = 2.25. Now suppose we accidentally mismeasured the last observation: the median for this new set is still M = 2, but the mean is pulled toward the erroneous value.
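A quick check of this behavior; the “mismeasured” last value of 40 is an assumption for illustration, since the slide does not state the erroneous value.

```python
import numpy as np

original = [1, 2, 2, 4]
mismeasured = [1, 2, 2, 40]  # hypothetical data-entry error in the last value

for data in (original, mismeasured):
    print(data, "-> median =", np.median(data), ", mean =", np.mean(data))
# The median stays at 2.0; the mean jumps from 2.25 to 11.25.
```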

90 Central Tendency: Median
(Figure: distribution with the mode Mo and the median M marked.)

91 Variability
Range: difference between the minimum and maximum values
Standard deviation: measures the spread of the data about the mean, measured in the same units as the data

92 Variability Measures of variability depict how similar observations of a variable tend to be. Variability of a nominal or ordinal variable is rarely summarized numerically. The more familiar measures of variability are mathematical, requiring measurement to be of the interval or ratio scale.

93 Variability Range, R: the distance from the minimum to the maximum observation. Easy to calculate. Influenced by extreme values (outliers): for example, a data set may have R = 9, but with a single extreme value the range may become R = 99.

94 Variability Interquartile Range, IQR
The distance from the 1st quartile (25th percentile) to the 3rd quartile (75th percentile), Q3 - Q1. Unlike the range, IQR is not influenced by extreme values.

95 Variability: IQR

96 Variability Standard deviation, s
Represents the average spread of the data around the mean. Expressed in the same units as the data. “Average deviation” from the mean.

97 Variability
Variance, s²: the standard deviation squared. The “average squared deviation” from the mean.
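A short sketch computing these variability measures on a small made-up sample with one extreme value.

```python
import numpy as np
from scipy import stats

x = np.array([4.0, 7.0, 7.5, 8.0, 9.5, 10.0, 23.0])  # made-up data with one outlier

r = x.max() - x.min()    # range
iqr = stats.iqr(x)       # interquartile range, Q3 - Q1
s = x.std(ddof=1)        # sample standard deviation
var = x.var(ddof=1)      # sample variance (s squared)

print(f"range = {r}, IQR = {iqr}, s = {s:.2f}, s^2 = {var:.2f}")
```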

98 Shape

99 Distribution Shapes

100 Summary
Basic Concepts: definition and role of statistics; vocabulary lesson; brief introduction to hypothesis testing; brief introduction to design concepts
Descriptive Statistics: levels of measurement; graphical summaries; numerical summaries
Next time: Study Design Considerations and Quality of Evidence

