
1 Medical statistics for cardiovascular disease Part 1
Giuseppe Biondi-Zoccai, MD Sapienza University of Rome, Latina, Italy

2 Learning milestones
- Key concepts
- Bivariate analysis
- Complex bivariate analysis
- Multivariable analysis
- Specific advanced methods

3 Why do you need to know statistics?
CLINICIAN RESEARCHER

4 A collection of methods

5 The EBM 3-step approach How an article should be appraised, in 3 steps: Step 1 – Are the results of the study (internally) valid? Step 2 – What are the results? Step 3 – How can I apply these results to patient care? Guyatt and Rennie, Users’ guide to the medical literature, 2002

6 The Cochrane Collaboration Risk of Bias Tool

7 The ultimate goal of any clinical or scientific observation is the appraisal of causality

8 Bradford Hill causality criteria
Strength:* precisely defined (p<0.05, weaker criterion) and with a strong relative risk (≤0.83 or ≥1.20) in the absence of multiplicity issues (stronger criterion)
Consistency:* results in favor of the association must be confirmed in other studies
Temporality: exposure must precede the event in a realistic fashion
Coherence: the hypothetical cause-effect relationship is not in contrast with other biologic or natural history findings
*statistics is important here
Mente et al, Arch Intern Med 2009

9 Bradford Hill causality criteria
Biologic gradient:* exposure dose and risk of disease are positively (or negatively) associated along a continuum
Experimental: experimental evidence from laboratory studies (weaker criterion) or randomized clinical trials (stronger criterion)
Specificity: exposure is associated with a single disease (does not apply to multifactorial conditions)
Plausibility: the hypothetical cause-effect relationship makes sense from a biologic or clinical perspective (weaker criterion)
Analogy: the hypothetical cause-effect relationship is based on analogic reasoning (weaker criterion)
*statistics is important here
Mente et al, Arch Intern Med 2009

10 Randomization Randomization is the technique that defines experimental studies in humans (though not only in them), and it enables the correct application of statistical hypothesis tests in a frequentist framework (according to Ronald Fisher's theory) Randomization means assigning a patient (or a study unit) at random to one of the treatments Over large numbers, randomization minimizes the risk of imbalances in patient or procedural features, but this does not hold true for small samples and for a large set of features

11 Any clinical or scientific comparison can be viewed as…
A battle between an underlying hypothesis (null, H0), stating that there is no meaningful difference or association (beyond random variability) between 2 or more populations of interest (from which we are sampling) and an alternative hypothesis (H1), which implies that there is a non-random difference between such populations. Any statistical test is a test trying to convince us that H0 is false (thus implying the working truthfulness of H1).

12 Falsifiability Falsifiability or refutability of a statement, hypothesis, or theory is the inherent possibility of proving it false. A statement is called falsifiable if it is possible to conceive an observation or an argument which proves the statement in question to be false. In this sense, falsify is synonymous with nullify, meaning not "to commit fraud" but "to show to be false".

13 Statistical or clinical significance?
Statistical and clinical significance are 2 very different concepts. A clinically significant difference, if demonstrated beyond the play of chance, is clinically relevant and thus merits subsequent action (provided cost and tolerability issues do not outweigh it). A statistically significant difference is a probabilistic concept and should be viewed in light of the distance from the null hypothesis and the chosen significance threshold.

14 Descriptive statistics

15 Inferential statistics
If I become a scaffolder, how likely am I to eat well every day? Confidence intervals and P values

16 Samples and populations
This is a sample

17 Samples and populations
And this is its universal population

18 Samples and populations
This is another sample

19 Samples and populations
And this might be its universal population

20 Samples and populations
But what if THIS is its universal population?

21 Samples and populations
Any inference thus depends on our confidence in its likelihood

22 Alpha and type I error Whenever I perform a test, there is thus a risk of a FALSE POSITIVE result, ie REJECTING A TRUE null hypothesis. This error is called type I, is measured as alpha and its unit is the p value. The lower the p value, the lower the risk of falling into a type I error (ie the HIGHER the SPECIFICITY of the test).

23 Alpha and type I error
Type I error is like a MIRAGE: I see something that does NOT exist

24 Beta and type II error Whenever I perform a test, there is also a risk of a FALSE NEGATIVE result, ie NOT REJECTING A FALSE null hypothesis. This error is called type II, is measured as beta, and its unit is a probability. The complement of beta is called power. The lower the beta, the lower the risk of missing a true difference (ie the HIGHER the SENSITIVITY of the test).

25 Beta and type II error
Type II error is like being BLIND: I do NOT see something that exists

26 Accuracy and precision
Accuracy measures the distance from the true value; precision measures the spread in the measurements

27 Accuracy and precision test

28 Accuracy and precision
Thus: Precision expresses the extent of RANDOM ERROR Accuracy expresses the extent of SYSTEMATIC ERROR (ie bias)

29 Validity Internal validity entails both PRECISION and ACCURACY (ie does a study provide a truthful answer to the research question?). External validity expresses the extent to which the results can be applied to other contexts and settings (it corresponds to the distinction between SAMPLE and POPULATION).

30 Intention-to-treat analysis
Intention-to-treat (ITT) analysis is an analysis based on the initial treatment intent, irrespective of the treatment actually administered. ITT analysis is intended to avoid various types of bias that can arise in intervention research, especially procedural, compliance and survivor bias. However, ITT dilutes the power to achieve statistically and clinically significant differences, especially as drop-in and drop-out rates rise.

31 Per-protocol analysis
In contrast to the ITT analysis, the per-protocol (PP) analysis includes only those patients who complete the entire clinical trial or other particular procedure(s), or who have complete data. In PP analysis each patient is categorized according to the actual treatment received, and not according to the originally intended treatment assignment. PP analysis is largely prone to bias, and is mainly useful in equivalence or non-inferiority studies.

32 ITT vs PP
100 pts enrolled; RANDOMIZATION: 50 pts to group A (more toxic), 50 pts to group B (conventional Rx, less toxic); ACTUAL THERAPY: 45 pts treated with A, 5 shifted to B because of poor global health (all 5 died), 50 patients treated with B (none died)

33 ITT vs PP
Same trial as above; ITT: 10% mortality in group A vs 0% in group B, p=0.021 in favor of B

34 ITT vs PP
Same trial as above; PP: 0% (0/45) mortality in group A vs 9.1% (5/55) in group B, p=0.038 in favor of A

35 Mean (arithmetic)
Characteristics:
-summarises information well
-discards a lot of information (dispersion?)
Assumptions:
-data are not skewed (skew distorts the mean; outliers make the mean very different)
-measured on a measurement scale (cannot find the mean of a categorical measure; an 'average' stent diameter may be meaningless)

36 Median
What is it? The one in the middle: place the values in order and the median is the central one
Definition: equally distant from all other values
Used for: ordinal data; skewed data / outliers

37 Standard deviation
SD = √[ Σ(x − mean)² / (N − 1) ]; the quantity under the square root is the variance
The sample SD approximates the population σ as N increases
Advantages: together with the mean it enables a powerful synthesis:
mean ± 1 SD: ~68% of data
mean ± 2 SD: ~95% of data (exact multiplier 1.96)
mean ± 3 SD: ~99% of data (exact multiplier for 99% is 2.58)
Disadvantages: it is based on normality assumptions
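
A minimal Python sketch of this formula on made-up numbers, showing the N - 1 denominator at work:

```python
import math

# Hypothetical sample of ejection fractions (%)
x = [52, 58, 61, 47, 55, 63, 50, 57, 54, 60]

n = len(x)
mean = sum(x) / n
variance = sum((xi - mean) ** 2 for xi in x) / (n - 1)   # N - 1 denominator
sd = math.sqrt(variance)

print(f"mean = {mean:.1f}, SD = {sd:.1f}")
# Under normality, roughly 95% of observations fall within mean +/- 1.96 SD
print(f"approx. 95% range: {mean - 1.96 * sd:.1f} to {mean + 1.96 * sd:.1f}")
```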

38 Interquartile range
Variable type: continuous (lesion length in 13 patients)
The interquartile range spans the 25th to 75th percentile, ie the 1st to 3rd quartile
1st quartile = 16.5; 3rd quartile = 23.5; interquartile range = 23.5 - 16.5 = 7.0

39 Coefficient of variation
CV = (standard deviation / mean) × 100
The coefficient of variation (CV) is an index of relative variability
CV is dimensionless
CV enables you to compare the dispersion of variables with different units of measurement

40 Learning milestones
- Key concepts
- Bivariate analysis
- Complex bivariate analysis
- Multivariable analysis
- Specific advanced methods

41 Point estimation & confidence intervals
Using summary statistics (the mean and standard deviation for normal variables, or the proportion for a categorical variable) and factoring in the sample size, we can build confidence intervals or test the hypothesis that we are sampling from a given population. This is done by creating a powerful tool, which weighs our dispersion measures by the sample size: the standard error.

42 First you need the SE
We can easily build the standard error of a proportion according to the following formula:
SE = √[ P * (1-P) / n ]
where the variance = P * (1-P) and n is the sample size
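
As an illustration, a minimal Python sketch of this formula (the counts are hypothetical):

```python
import math

# Hypothetical example: 47 events observed among 592 patients
events, n = 47, 592
p = events / n                      # observed proportion
se = math.sqrt(p * (1 - p) / n)     # SE = sqrt(P * (1 - P) / n)

print(f"P = {p:.3f}, SE = {se:.4f}")
```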

43 Point estimation & confidence intervals
We can then create a simple test to check whether the summary estimate we have found is compatible, allowing for random variation, with the corresponding reference population mean. The Z test (when the population SD is known) and the t test (when the population SD is only estimated) are thus used, and both can be viewed as a signal to noise ratio.

44 Signal to noise ratio
Signal to noise ratio = Signal / Noise

45 From the Z test…
Z score = absolute difference in summary estimates / standard error
The resulting z score corresponds to a distinct tail probability of the Gaussian curve (eg 1.96 corresponds to a one-tailed probability of 0.025 or a two-tailed probability of 0.05)

46 …to confidence intervals
The standard error (SE or SEM) can be used to test a hypothesis or to create a confidence interval (CI) around a mean for a continuous variable (eg mortality rate)
SE = SD / √n
95% CI = mean ± 2 SE
95% means that, if we repeated the study 20 times, 19 times out of 20 the interval would include the true population average
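
A short Python sketch of the same recipe, with hypothetical summary statistics and the exact 1.96 multiplier instead of the rounded 2:

```python
import math

# Hypothetical sample: mean, SD and size of a continuous measurement
mean, sd, n = 55.0, 8.0, 64

se = sd / math.sqrt(n)          # SE = SD / sqrt(n)
lower = mean - 1.96 * se
upper = mean + 1.96 * se

print(f"SE = {se:.2f}; 95% CI = {lower:.1f} to {upper:.1f}")
```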

47 Ps and confidence intervals
P values and confidence intervals are closely connected. Any hypothesis test providing a significant result (eg p=0.045) means that we can be confident at 95.5% that the population average difference lies away from zero (ie away from the null hypothesis).

48 P values and confidence intervals
[Figure: confidence intervals relative to the null hypothesis (H0), contrasting important vs trivial differences with significant (p<0.05) vs non-significant (p>0.05) results]

49 Power and sample size Whenever designing a study or analyzing a dataset, it is important to estimate the sample size or the power of the comparison. SAMPLE SIZE Setting a specific alpha and a specific beta, you calculate the necessary sample size given the average inter-group difference and its variation. POWER Given a specific sample size and alpha, in light of the calculated average inter-group difference and its variation, you obtain an estimate of the power (ie 1-beta).
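
A hedged Python sketch of a standard sample-size formula for comparing two proportions (normal approximation); the event rates, alpha and power are hypothetical, and the helper name n_per_group is mine:

```python
import math
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Approximate patients per arm to compare two proportions
    (two-sided alpha, normal approximation)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    numerator = (z_a + z_b) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2))
    return math.ceil(numerator / (p1 - p2) ** 2)

# Hypothetical design: detect a drop in event rate from 15% to 8%
print(n_per_group(0.15, 0.08))   # patients needed in each arm
```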

50 Hierarchy of analysis
A statistical analysis can be:
Univariate (e.g. when describing a mean or standard deviation)
Bivariate (e.g. when comparing age in men and women)
Multivariable (e.g. when appraising how age and gender impact on the risk of death)
Multivariate (e.g. when appraising how age and gender simultaneously impact on the risk of death and hospital costs)

51 Types of variables
PAIRED OR REPEATED MEASURES: eg blood pressure measured twice in the same patients at different times
UNPAIRED OR INDEPENDENT MEASURES: eg blood pressure measured only once in several different groups of patients

52 Types of variables
CATEGORY: nominal; ordinal (ordered categories, ranks)
QUANTITY: discrete (counting); continuous (measuring)

53 Statistical tests
Are data categorical or continuous? Categorical data: compare proportions in groups. Continuous data: compare means or medians in groups.
How many groups?
Two groups: normal data, use the t test; non-normal data, use the Mann Whitney U test
More than two groups: normal data, use ANOVA; non-normal data, use the Kruskal Wallis test

54 Student t test
It is used to test the null hypothesis that the means of two normally distributed populations are equal. Given two data sets (each with its mean, SD and number of data points), the t test determines whether the means are distinct, provided that the underlying distributions can be assumed to be normal. The Student t test should be used if the variances (not known) of the two populations are also assumed to be equal; the form of the test used when this assumption is dropped is sometimes called Welch's t test.
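
As a sketch only, assuming SciPy is available, the two forms of the test on hypothetical data:

```python
from scipy import stats

# Hypothetical LDL values (mg/dL) in two independent groups
group_a = [102, 98, 110, 95, 105, 99, 108, 101]
group_b = [115, 109, 120, 112, 118, 111, 116, 114]

# Classic Student t test: assumes equal (but unknown) variances
t, p = stats.ttest_ind(group_a, group_b, equal_var=True)
print(f"Student t = {t:.2f}, p = {p:.4f}")

# Welch t test: the equal-variance assumption is dropped
t_w, p_w = stats.ttest_ind(group_a, group_b, equal_var=False)
print(f"Welch t = {t_w:.2f}, p = {p_w:.4f}")
```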

55 Mann Whitney rank sum U test

56 Paired Student t test
EF at baseline and FU in patients treated with BMC for MI: a significant increase in EF from 48.7% (8.3) to 55.1% (7.4) by paired t test, p=0.005, with only 11 patients!
MAGIC, Lancet 2004

57 Wilcoxon signed rank test

58 1-way ANOVA As with the t-test, ANOVA is appropriate when the data are continuous, when the groups are assumed to have similar variances, and when the data are normally distributed ANOVA is based upon a comparison of variance attributable to the independent variable (variability between groups or conditions) relative to the variance within groups resulting from random chance. In fact, the formula involves dividing the between-group variance estimate by the within-group variance estimate
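
A minimal sketch, assuming SciPy is available and using hypothetical blood pressure data, of the one-way ANOVA and the non-parametric alternative covered a few slides later:

```python
from scipy import stats

# Hypothetical systolic BP in three independent treatment groups
g1 = [128, 132, 125, 130, 127]
g2 = [121, 119, 124, 118, 122]
g3 = [135, 138, 131, 136, 134]

# One-way ANOVA: between-group variance divided by within-group variance
f, p = stats.f_oneway(g1, g2, g3)
print(f"F = {f:.2f}, p = {p:.4f}")

# Non-parametric alternative (Kruskal-Wallis)
h, p_kw = stats.kruskal(g1, g2, g3)
print(f"Kruskal-Wallis H = {h:.2f}, p = {p_kw:.4f}")
```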

59 Post-hoc test

60 Kruskal Wallis test
Post-hoc analysis with Mann Whitney U and Bonferroni correction

61 Compare continuous variables
Three (or more) paired groups Again ask yourself… Parametric or not? If parametric: ANOVA for repeated measures in SPSS… in the General Linear Model If non-parametric: Friedman test

62 Friedman test

63 2-way ANOVA A mixed-design ANOVA is used to test for differences between independent groups whilst subjecting participants to repeated measures. In a mixed-design ANOVA model, one factor is a between-subjects variable (drug) and the other is a within-subjects variable (BP). CAMELOT, JAMA 2004

64 Binomial test
Is the percentage of diabetics in this sample comparable with that of the known chronic AF population? We assume the population rate is 15%
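
A sketch of an exact binomial test in plain Python; the sample counts are hypothetical, and the two-sided p-value is obtained by summing the probabilities of all outcomes no more likely than the observed one:

```python
from math import comb

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def exact_binomial_test(k_obs, n, p0):
    """Two-sided exact binomial p-value: sum the probabilities of every
    outcome that is no more likely than the observed count."""
    p_obs = binom_pmf(k_obs, n, p0)
    return sum(binom_pmf(k, n, p0) for k in range(n + 1)
               if binom_pmf(k, n, p0) <= p_obs + 1e-12)

# Hypothetical sample: 32 diabetics among 150 AF patients,
# tested against the assumed population rate of 15%
print(f"p = {exact_binomial_test(32, 150, 0.15):.4f}")
```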

65 Compare discrete variables
The second basis is the relation between "observed" and "expected" frequencies

66 Compare event rates
Absolute Risk (AR): 7.9% (47/592) and 15.1% (89/591)
Absolute Risk Reduction (ARR): 7.9% (47/592) - 15.1% (89/591) = -7.2%
Relative Risk (RR): 7.9% (47/592) / 15.1% (89/591) = 0.52 (given an equivalence value of 1)
Relative Risk Reduction (RRR): 1 - 0.52 = 0.48, or 48%
Odds Ratio (OR): 8.6% (47/545) / 17.7% (89/502) = 0.49 (given an equivalence value of 1)
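
The same arithmetic in a short Python sketch, using the counts quoted on this slide:

```python
# Event counts from the slide: 47/592 (experimental) vs 89/591 (control)
a, n1 = 47, 592
b, n2 = 89, 591

ar1, ar2 = a / n1, b / n2
arr = ar1 - ar2                      # absolute risk reduction
rr = ar1 / ar2                       # relative risk
rrr = 1 - rr                         # relative risk reduction
odds1, odds2 = a / (n1 - a), b / (n2 - b)
odds_ratio = odds1 / odds2           # odds ratio

print(f"AR: {ar1:.1%} vs {ar2:.1%}; ARR: {arr:.1%}")
print(f"RR: {rr:.2f}; RRR: {rrr:.0%}; OR: {odds_ratio:.2f}")
```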

67 Post-hoc groups
"The chi-square test was used to determine differences between groups with respect to the primary and secondary end points. Odds ratios and their 95 percent confidence intervals were calculated. Comparisons of patient characteristics and survival outcomes were tested with the chi-square test, the chi-square test for trend, Fisher's exact test, or Student's t-test, as appropriate."
This is a sub-group! Bonferroni! The significance threshold should be divided by the number of tests performed… or the computed p-value multiplied by the number of tests: p=0.12 and not p=0.04!
Wenzel et al, NEJM 2004

68 Fisher Exact test
2×2 table (Exp vs Ctrl columns, with column totals s1 and s2): event counts a and b (row total r1), no-event counts c and d (row total r2), grand total N
P = (s1! * s2! * r1! * r2!) / (N! * a! * b! * c! * d!)
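
A sketch of the formula above in Python, applied to a hypothetical 2×2 table, plus the full two-sided test via SciPy (which sums the probabilities of all tables at least as extreme as the observed one):

```python
from math import factorial
from scipy.stats import fisher_exact

# Hypothetical 2x2 table: rows = event / no event, columns = Exp / Ctrl
a, b = 3, 9
c, d = 12, 6
r1, r2 = a + b, c + d
s1, s2 = a + c, b + d
n = a + b + c + d

# Probability of this exact table with fixed margins (the slide's formula)
p_table = (factorial(s1) * factorial(s2) * factorial(r1) * factorial(r2)) / (
    factorial(n) * factorial(a) * factorial(b) * factorial(c) * factorial(d))
print(f"P(this exact table) = {p_table:.4f}")

odds_ratio, p_value = fisher_exact([[a, b], [c, d]])
print(f"OR = {odds_ratio:.2f}, two-sided p = {p_value:.4f}")
```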

69 McNemar test
The McNemar test is a hypothesis test to compare categorical variables in related samples. For instance, how can I appraise the statistical significance of a change in symptom status (asymptomatic vs symptomatic) in the same patients over time? The McNemar test exploits the discordant pairs to generate a p value.
Example 1 (baseline vs follow-up status): symptomatic/symptomatic 15, symptomatic/asymptomatic 3, asymptomatic/symptomatic 5, asymptomatic/asymptomatic 17: p=0.72 at McNemar test
Example 2: concordant pairs 15 and 17, discordant pairs 0 and 8: p=0.013 at McNemar test
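
A minimal Python sketch of the continuity-corrected McNemar statistic, applied to the discordant pairs of the two examples above (for a chi-square with 1 degree of freedom, the p-value can be obtained with the complementary error function):

```python
import math

def mcnemar_cc(b, c):
    """McNemar test with continuity correction on the discordant pairs b and c.
    For a chi-square statistic with 1 df, p = erfc(sqrt(chi2 / 2))."""
    chi2 = (abs(b - c) - 1) ** 2 / (b + c)
    return chi2, math.erfc(math.sqrt(chi2 / 2))

print(mcnemar_cc(3, 5))   # example 1: p ~ 0.72
print(mcnemar_cc(0, 8))   # example 2: p ~ 0.013
```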

70 Take home messages

71 Take home messages Biostatistics is best seen as a set of different tools and methods which are used according to the problem at hand. Nobody is experienced in statistics at the beginning, and only by facing everyday real-world problems can you familiarize yourself with different techniques and approaches. In general terms, it is also crucial to remember that the easiest and simplest way to solve a statistical problem, if appropriate, is also the best and the one recommended by reviewers.

72 Many thanks for your attention!

73 Medical statistics for cardiovascular disease Part 2
Giuseppe Biondi-Zoccai, MD Sapienza University of Rome, Latina, Italy

74 Learning milestones
- Key concepts
- Bivariate analysis
- Complex bivariate analysis
- Multivariable analysis
- Specific advanced methods

75 Linear regression
Which of the different possible lines that I can graphically trace and compute is the best regression line? Intuitively, it is the line that minimizes the (squared) differences between the observed values (yi) and the estimated values (yi')
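
A least-squares sketch in plain Python with made-up data, showing the slope and intercept that minimize the squared differences between yi and yi':

```python
# Hypothetical paired observations
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [2.1, 2.9, 3.6, 4.4, 5.2, 5.8]

n = len(x)
mx, my = sum(x) / n, sum(y) / n
beta1 = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)   # slope
beta0 = my - beta1 * mx                   # intercept

print(f"fitted line: y' = {beta0:.2f} + {beta1:.2f} * x")
```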

76 Correlation The square root of the coefficient of determination (R2) is the correlation coefficient (R) and shows the degree of linear association between 2 continuous variables, but disregards causation. K. Pearson Assumes values between -1.0 (negative association), 0 (no association), and +1.0 (positive association). It can be summarized as a point summary estimate, with specific standard error, 95% confidence interval, and p value.

77 Dangers of not plotting data
4 sets of data: all with the same R=0.81!* *At linear regression analysis

78 What about non-linear associations?
Each number corresponds to the correlation coefficient for linear association (R)!!!

79 Pearson vs Spearman Whenever the independent and dependent variables can be assumed to belong to normal distributions, the Pearson linear correlation method can be used, maximizing statistical power and yield. Whenever the data are sparse, rare, and/or not belonging to normal distributions, the non-parametric Spearman correlation method should be used, which yields the rank correlation coefficient (rho), but not its R2. C. Spearman
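
A quick sketch contrasting the two coefficients on hypothetical data, assuming SciPy is available:

```python
from scipy import stats

# Hypothetical paired measurements
x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [2.0, 2.3, 3.1, 4.8, 5.0, 6.9, 7.4, 9.5]

r, p_r = stats.pearsonr(x, y)        # parametric, assumes normality
rho, p_rho = stats.spearmanr(x, y)   # rank-based, non-parametric

print(f"Pearson r = {r:.2f} (p = {p_r:.4f})")
print(f"Spearman rho = {rho:.2f} (p = {p_rho:.4f})")
```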

80 Bland Altman plot
The difference A - B in each case is plotted against the mean of measurements A and B in each case

81 Regression to the mean: don’t bet on past rookies of the year!

82 Ecological fallacy

83 Ecological fallacy

84 Logistic regression
We model ln [p/(1-p)] instead of just p, and the linear model is written as: ln [p/(1-p)] = ln(p) - ln(1-p) = β0 + β1*X
Logistic regression is based on the logit, which transforms the probability of a dichotomous dependent variable into a continuous, unbounded quantity
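
A tiny Python sketch of the logit and its inverse; the coefficients b0 and b1 are hypothetical, not fitted values:

```python
import math

def logit(p):
    """Log-odds: maps a probability in (0, 1) onto the whole real line."""
    return math.log(p / (1 - p))

def inv_logit(x):
    """Inverse transform: maps a linear predictor back to a probability."""
    return 1 / (1 + math.exp(-x))

# Hypothetical model: ln[p/(1-p)] = b0 + b1 * age
b0, b1 = -6.0, 0.08
for age in (40, 60, 80):
    print(age, round(inv_logit(b0 + b1 * age), 3))
```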

85 Generalized Linear Models
All generalized linear models have three components : Random component identifies the response variable and assumes a probability distribution for it Systematic component specifies the explanatory variables used as predictors in the model (linear predictor). Link describes the functional relationship between the systematic component and the expected value (mean) of the random component. The GLM relates a function of that mean to the explanatory variables through a prediction equation having linear form. The model formula states that: g(µ) = α + β1x1 + … + βkxk

86 Generalized Linear Models
Through differing link functions, the GLM corresponds to other well known models:
Normal distribution: identity link
Exponential distribution: inverse link
Gamma distribution: inverse link
Inverse Gaussian distribution: inverse squared link
Poisson distribution: log link
Binomial distribution: logit link

87 Survival analysis Patients experiencing one or more events are called responders. Patients who, at the end of the observational period or before it, leave the study without having experienced any event are called censored.

88 Survival analysis
[Figure: six patients (A to F) followed over the study period, with events, withdrawals and losses to follow-up marked] A and F: events; B, C, D and E: censored

89 Product limit (Kaplan-Meier) analysis
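
A bare-bones sketch of the product-limit estimator in plain Python; the six patients and their follow-up times are hypothetical, echoing the A-F diagram above:

```python
def kaplan_meier(times, events):
    """Product-limit (Kaplan-Meier) survival estimate.
    times: follow-up durations; events: 1 = event, 0 = censored."""
    # At tied times, process events before censorings (usual convention)
    data = sorted(zip(times, events), key=lambda te: (te[0], -te[1]))
    at_risk = len(data)
    s = 1.0
    curve = []
    for t, e in data:
        if e == 1:
            s *= (at_risk - 1) / at_risk   # survival drops only at event times
            curve.append((t, round(s, 3)))
        at_risk -= 1                       # each patient leaves the risk set
    return curve

times = [2, 5, 6, 8, 9, 12]
events = [1, 0, 0, 1, 0, 0]
print(kaplan_meier(times, events))   # [(2, 0.833), (8, 0.556)]
```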

90 Kaplan-Meier curves and SE
Serruys et al, NEJM 2010

91 Learning milestones
- Key concepts
- Bivariate analysis
- Complex bivariate analysis
- Multivariable analysis
- Specific advanced methods

92 Multivariable statistical methods
The goal is to explain the variation in the dependent variable by several other variables simultaneously. Independent variables (also called predictors, regressors, explanatory variables, prognostic factors or manipulated variables) have an effect on the dependent variable (response), which is influenced by them.

93 Bivariate statistical methods
One dependent variable ~ one independent variable:
Qualitative D.V. with qualitative I.V.: Chi²
Qualitative D.V. with quantitative I.V.: logistic regression
Quantitative D.V. with qualitative I.V.: one-way ANOVA
Quantitative D.V. with quantitative I.V.: simple regression

94 Multivariable statistical methods
One dependent variable ~ several independent variables:
Qualitative D.V. with qualitative I.V.: Chi²
Qualitative D.V. with quantitative I.V.: logistic regression
Quantitative D.V. with qualitative I.V.: ANOVA
Quantitative D.V. with quantitative I.V.: regression

95 Multivariable analysis
The methods mentioned have specific application domains depending on the nature of the variables involved in the analysis. But conceptually and computationally there are many similarities between these techniques. Each of the multivariable methods evaluates the effect of an independent variable on the dependent variable, controlling for the effect of the other independent variables. Methods such as multiple regression, multi-factor ANOVA and analysis of covariance make the same assumptions about the distribution of the dependent variable. We will learn more about the concepts of multivariable analysis by reviewing the simple linear regression model.

96 Multiple linear regression
Simple linear regression is a statistical model to predict the value of one continuous variable Y (dependent, response) from another continuous variable X (independent, predictor, covariate, prognostic factor). Multiple linear regression is a natural extension of the simple linear regression model We use it to investigate the effect on the response variable of several predictor variables, simultaneously It is a hypothetical model of the relationship between several independent variables and a response variable. Let’s start by reviewing the concepts of the simple linear regression model.
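
A minimal sketch, assuming NumPy is available, fitting two hypothetical predictors at once by ordinary least squares:

```python
import numpy as np

# Hypothetical data: predict systolic BP from age and BMI
age = np.array([45, 52, 61, 38, 70, 55, 63, 47], dtype=float)
bmi = np.array([24.0, 28.5, 30.1, 22.3, 27.8, 31.0, 26.4, 25.2])
sbp = np.array([122, 135, 148, 115, 152, 141, 144, 126], dtype=float)

# Design matrix with a constant column for the intercept
X = np.column_stack([np.ones(len(age)), age, bmi])
beta, *_ = np.linalg.lstsq(X, sbp, rcond=None)

print("intercept, b_age, b_bmi =", np.round(beta, 2))
```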

97 Multiple regression models
Model terms may be divided into the following categories:
Constant term
Linear terms / main effects (e.g. X1)
Interaction terms (e.g. X1*X2)
Quadratic terms (e.g. X1²)
Cubic terms (e.g. X1³)
Models are usually described by the highest term present:
Linear models have only linear terms
Interaction models have linear and interaction terms
Quadratic models have linear, quadratic and first order interaction terms
Cubic models have terms up to third order.

98 The model-building process
Source: Applied Linear Statistical Models, Neter, Kutner, Nachtsheim, Wasserman

99 AIC and BIC
AIC (Akaike Information Criterion) and BIC (Schwarz Information Criterion) are two popular model selection methods. They not only reward goodness of fit, but also include a penalty that is an increasing function of the number of estimated parameters. This penalty discourages overfitting. The preferred model is the one with the lowest value of AIC or BIC. These criteria attempt to find the model that best explains the data with a minimum of free parameters. The AIC penalizes free parameters less strongly than the Schwarz criterion does.
AIC = 2k + n ln(SSError / n)
BIC = n ln(SSError / n) + k ln(n)
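
A small Python sketch of these two formulas on hypothetical fit results, to show how the stronger BIC penalty can reverse the ranking:

```python
import math

def aic_bic(sse, n, k):
    """Least-squares AIC and BIC as written above:
    sse = error sum of squares, n = observations, k = estimated parameters."""
    aic = 2 * k + n * math.log(sse / n)
    bic = n * math.log(sse / n) + k * math.log(n)
    return round(aic, 1), round(bic, 1)

# Hypothetical comparison: a 3-parameter vs a 5-parameter model
print(aic_bic(sse=420.0, n=100, k=3))
print(aic_bic(sse=395.0, n=100, k=5))   # lower value = preferred model
```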

100 Two-Factor ANOVA Introduction
A method for simultaneously analyzing two factors affecting a response. Group effect: treatment group or dose level. Blocking factor whose variation can be separated from the error variation to give more precise group comparisons: study center, gender, disease severity, diagnostic group, … One of the most common ANOVA methods used in clinical trial analysis. Similar assumptions as for single-factor ANOVA. Non-parametric alternative: Friedman test

101 Two-Factor ANOVA The Model
The model: y_ijk = µ + α_i + β_j + (αβ)_ij + ε_ijk, where y_ijk is the response score of subject k in column i and row j, µ is the overall mean, α_i is the effect of the treatment factor (a levels, or i columns), β_j is the effect of the blocking factor (b levels, or j rows), (αβ)_ij is the interaction effect, and ε_ijk is the error (the effect of unmeasured variables)

102 Analysis of Covariance ANCOVA
Method for comparing response means among two or more groups adjusted for a quantitative concomitant variable, or “covariate”, thought to influence the response. The response variable is explained by independent quantitative variable(s) and qualitative variable(s). Combination of ANOVA and regression. Increases the precision of comparison of the group means by decreasing the error variance. Widely used in clinical trials

103 Analysis of Covariance The model
The covariance model for a single-factor with fixed levels adds another term to the ANOVA model, reflecting the relationship between the response variable and the concomitant variable. The concomitant variable is centered around the mean so that the constant µ represents the overall mean in the model.

104 Repeated-Measures Basic concepts
‘Repeated-measures’ are measurements taken from the same subject (patient) at repeated time intervals. Many clinical studies require: multiple visits during the trial response measurements made at each visit A repeated measures study may involve several treatments or only a single treatment. ‘Repeated-measures’ are used to characterize a response profile over time. Main research question: Is the mean response profile for one treatment group the same as for another treatment group or a placebo group ? Comparison of response profiles can be tested with a single F-test.

105 Repeated-Measures Comparing profiles
Source: Common Statistical Methods for Clinical Research, 1997, Glenn A. Walker

106 Repeated Measures ANOVA Random Effects – Mixed Model
What are your conclusions about the between-subjects species effect and the within-subjects season effect?

107 Repeated Measures ANOVA Correlated Measurements – Multivariate Model
Response Profiles Multi-variate F-tests

108 Logistic regression Sangiorgi et al, AHJ 2008

109 Multiple Regression SPSS Variable Selection Methods
Enter. A procedure for variable selection in which all variables in a block are entered in a single step. Forward Selection (Likelihood Ratio). Stepwise selection method with entry testing based on the significance of the score statistic, and removal testing based on the probability of a likelihood-ratio statistic based on the maximum partial likelihood estimates. Backward Elimination (Likelihood Ratio). Backward stepwise selection. Removal testing is based on the probability of the likelihood-ratio statistic based on the maximum partial likelihood estimates.

110 Cox PH analysis
The problem: we can't use ordinary linear regression, because how do we account for the censored data? And we can't use logistic regression without ignoring the time component.
With a continuous outcome variable we use linear regression; with a dichotomous (binary) outcome variable we use logistic regression; where the time to an event is the outcome of interest, Cox regression is the most popular regression technique.

111 Cox PH analysis Cosgrave et al, AJC 2005

112 Harrell C index

113 Learning milestones
- Key concepts
- Bivariate analysis
- Complex bivariate analysis
- Multivariable analysis
- Specific advanced methods

114 Question: When there are many confounding covariates needed to adjust for:
Matching based on many covariates is not practical
Stratification is difficult: as the number of covariates increases, the number of strata grows exponentially (1 covariate: 2 strata; 5 covariates: 32, ie 2⁵, strata)
Regression adjustment may not be possible (potential problem: over-fitting)

115 Propensity score
Replace the collection of confounding covariates (age, gender, ejection fraction, risk factors, lesion characteristics) with one scalar function of these covariates: a single composite covariate, the propensity score (a balancing score)

116 Comparability
No comparison possible…

117 Compare treatments with propensity score
Three common methods of using the propensity score to adjust results: Matching Stratification Regression adjustment
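
As a sketch only, assuming scikit-learn is available, the propensity score can be estimated as the predicted probability of treatment given (hypothetical) covariates; the scores can then feed any of the three approaches above:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical covariates (age, ejection fraction) and treatment received
X = np.array([[65, 35], [72, 40], [58, 55], [80, 30],
              [61, 50], [69, 38], [55, 60], [75, 33]], dtype=float)
treated = np.array([1, 1, 0, 1, 0, 1, 0, 1])

# Propensity score = probability of receiving treatment given the covariates
model = LogisticRegression(max_iter=1000).fit(X, treated)
propensity = model.predict_proba(X)[:, 1]
print(np.round(propensity, 2))

# These scores can then be used for matching, stratification (eg quintiles)
# or regression adjustment.
```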

118 Goal of a clinical trial is appraisal of…
Superiority: difference in biologic effect or clinical effect Equivalence: lack of meaningful/clinically relevant difference in biologic effect or clinical effect Non-inferiority: lack of meaningful/clinically relevant increase in adverse clinical event

119 Superiority RCT Possibly greatest medical invention ever
Randomization of an adequate number of subjects ensures prognostically similar groups at study beginning. If thorough blinding is enforced, the groups maintain a similar prognosis even later on (except for the effect of the experimental treatment). Sloppiness/cross-over makes the arms more similar -> the traditional treatment is not discarded. Per-protocol analysis is almost always misleading.

120 Equivalence/non-inferiority RCT
A completely different paradigm: the goal is to conclude that the new treatment is not "meaningfully worse" than the comparator. This requires a subjective margin. Sloppiness/cross-over makes the arms more similar -> the traditional treatment is more likely to be discarded. Per-protocol analysis is possibly useful to analyze safety, but the bulk of the analysis is still based on the intention-to-treat principle.
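
A hedged Python sketch of the core non-inferiority check: compare the upper limit of the 95% CI of the risk difference with a pre-specified margin (all counts and the margin are hypothetical):

```python
import math

# Hypothetical event counts: new treatment vs comparator
e_new, n_new = 38, 500
e_old, n_old = 36, 500
margin = 0.04                      # pre-specified non-inferiority margin

p_new, p_old = e_new / n_new, e_old / n_old
diff = p_new - p_old
se = math.sqrt(p_new * (1 - p_new) / n_new + p_old * (1 - p_old) / n_old)
upper = diff + 1.96 * se           # upper limit of the 95% CI

print(f"difference = {diff:.3f}, 95% CI upper limit = {upper:.3f}")
print("non-inferior" if upper < margin else "non-inferiority NOT shown")
```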

121 Superiority, equivalence or non-inferiority?
Vassiliades et al, JACC 2005

122 Possible outcomes in a non-inferiority trial
[Figure: observed differences with 95% CIs plotted against the non-inferiority margin; ← new treatment better, new treatment worse →]

123 Typical non-inferiority design
Hiro et al, JACC 2009

124 Cumulative meta-analysis
Antman et al, JAMA 1992

125 Meta-analysis of intervention studies
De Luca et al, EHJ 2009

126 Funnel plot

127 Indirect and network meta-analyses
Direct plus indirect (i.e. network) Jansen et al, ISPOR 2008

128 Resampling Resampling refers to the use of the observed data or of a data generating mechanism (such as a die or computer-based simulation) to produce new hypothetical samples, the results of which can then be analyzed. The term computer-intensive methods also is frequently used to refer to techniques such as these…

129 Bootstrap The bootstrap is a modern, computer-intensive, general purpose approach to statistical inference, falling within a broader class of resampling methods. Bootstrapping is the practice of estimating properties of an estimator (such as its variance) by measuring those properties when sampling from an approximating distribution. One standard choice for an approximating distribution is the empirical distribution of the observed data.
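
A minimal percentile-bootstrap sketch in plain Python, with a hypothetical skewed sample; the helper name bootstrap_ci is mine:

```python
import random

random.seed(42)

# Hypothetical skewed variable (eg length of stay, in days)
data = [2, 3, 3, 4, 5, 5, 6, 8, 9, 12, 15, 21]

def bootstrap_ci(sample, stat, n_boot=5000, alpha=0.05):
    """Percentile bootstrap CI: resample with replacement from the empirical
    distribution and take the alpha/2 and 1-alpha/2 quantiles."""
    estimates = sorted(
        stat(random.choices(sample, k=len(sample))) for _ in range(n_boot))
    return (estimates[int(n_boot * alpha / 2)],
            estimates[int(n_boot * (1 - alpha / 2))])

mean = lambda s: sum(s) / len(s)
print("point estimate:", round(mean(data), 2))
print("95% bootstrap CI:", bootstrap_ci(data, mean))
```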

130 Jackknife Jackknifing is a resampling method based on the creation of several subsamples by excluding a single case at a time. Thus, there are only N jackknife samples for any given original sample with N cases. After the systematic recomputation of the statistic of choice is completed, a point estimate and an estimate of the variance of the statistic can be calculated.

131 The Bayes theorem

132 The Bayes theorem The main feature of Bayesian statistics is that it takes into account prior knowledge of the hypothesis

133 Bayes theorem
P(H | D) = P(D | H) * P(H) / P(D)
where P(H) is the prior (or marginal) probability of the hypothesis, P(D | H) is the likelihood of the hypothesis (the conditional probability of the data given H), P(D) is the probability of the data (its prior or marginal probability, acting as a normalizing constant), and P(H | D) is the posterior (or conditional) probability of the hypothesis H.
Thus the theorem relates the conditional and marginal probabilities of two random events and it is often used to compute posterior probabilities given observations.
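
A numeric sketch of the theorem with hypothetical figures for a diagnostic test (H = disease present, D = positive test):

```python
# Hypothetical inputs
p_h = 0.10                 # prior probability of disease (prevalence), P(H)
p_d_given_h = 0.90         # sensitivity, P(D | H)
p_d_given_not_h = 0.20     # false positive rate, P(D | not H)

# Normalizing constant P(D), by the law of total probability
p_d = p_d_given_h * p_h + p_d_given_not_h * (1 - p_h)

# Posterior probability of disease given a positive test
p_h_given_d = p_d_given_h * p_h / p_d
print(f"P(H | D) = {p_h_given_d:.2f}")   # about 0.33
```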

134 Frequentists vs Bayesians

135 “Classical” statistical inference vs Bayesians inference

136 Before the next module, a question for you: who is a Bayesian?
A Bayesian is one who, vaguely expecting a horse and catching a glimpse of a donkey, strongly believes he has seen a mule


139 JMP Statistical Discovery Software
JMP is a software package that was first developed by John Sall, co-founder of SAS, to perform simple and complex statistical analyses. It dynamically links statistics with graphics to interactively explore, understand, and visualize data. This allows you to click on any point in a graph and see the corresponding data point highlighted in the data table and other graphs. JMP provides a comprehensive set of statistical tools as well as design of experiments and statistical quality control in a single package. JMP allows for custom programming and script development via JSL, originally known as "John's Scripting Language". An add-on, JMP Genomics, comes with over 100 analytic procedures to facilitate the treatment of data involving genetics, microarrays or proteomics. Pros: very intuitive, lean package for design and analysis in research Cons: less complete and less flexible than the complete SAS system Price: €€€€.

140 R R is a programming language and software environment for statistical computing and graphics, and it is an implementation of the S programming language with lexical scoping semantics. R is widely used for statistical software development and data analysis. Its source code is freely available under the GNU General Public License, and pre-compiled binary versions are provided for various operating systems. R uses a command line interface, though several graphical user interfaces are available. Pros: flexibility and programming capabilities (eg for bootstrap), sophisticated graphical capabilities. Cons: complex and user-unfriendly interface. Price: free.

141 S and S-Plus S-PLUS is a commercial package sold by TIBCO Software Inc. with a focus on exploratory data analysis, graphics and statistical modeling It is an implementation of the S programming language. It features object-oriented programming capabilities and advanced analytical algorithms (eg for robust regression, repeated measurements, …) Pros: flexibility and programming capabilities (eg for bootstrap), user-friendly graphical user interface Cons: complex matrix programming environment Price: €€€€-€€.

142 SAS SAS (originally Statistical Analysis System, 1968) is an integrated suite of platform independent software modules provided by SAS Institute (1976, Jim Goodnight and Co). The functionality of the system is very complete and built around four major tasks: data access, data management, data analysis and data presentation. Applications of the SAS system include: statistical analysis, data mining, forecasting; report writing and graphics; operations research and quality improvement; applications development; data warehousing (extract, transform, load). Pros: very complete tool for data analysis, flexibility and programming capabilities (eg for Bayesian, bootstrap, conditional, or meta-analyses), large volumes of data Cons: complex programming environment, labyrinth of modules and interfaces, very expensive Price: €€€€-€€€€

143 Statistica STATISTICA is a powerful statistics and analytics software package developed by StatSoft, Inc. Provides a wide selection of data analysis, data management, data mining, and data visualization procedures. Features of the software include basic and multivariate statistical analysis, quality control modules and a collection of data mining techniques. Pros: extensive range of methods, user-friendly graphical interface, has been called “the king of graphics” Cons: limited flexibility and programming capabilities, labyrinth Price: €€€€.

144 SPSS SPSS (originally, Statistical Package for the Social Sciences) is a computer program used for statistical analysis released in its first version in 1968 and now distributed by IBM. SPSS is among the most widely used programs for statistical analysis in social science. It is used by market researchers, health researchers, survey companies, government, education researchers, marketing organizations and others. Pros: extensive range of tests and procedures, user-friendly graphical interface. Cons: limited flexibility and programming capabilities. Price: €€€€.

145 Stata Stata (name formed by blending "statistics" and "data") is a general-purpose statistical software package created in 1985 by StataCorp. Stata's full range of capabilities includes: data management, statistical analysis, graphics generation, simulations, custom programming. Most meta-analysis tools were first developed for Stata, and thus this package offers one of the most extensive libraries of statistical tools for systematic reviewers Pros: flexibility and programming capabilities (eg for bootstrap, or meta-analyses), sophisticated graphical capabilities Cons: relatively complex interface Price: €€-€€€

146 WinBUGS and OpenBUGS WinBUGS (Windows-based Bayesian inference Using Gibbs Sampling) is a statistical software for the Bayesian analysis of complex statistical models using Markov chain Monte Carlo (MCMC) methods, developed by the MRC Biostatistics Unit, at the University of Cambridge, UK. It is based on the BUGS (Bayesian inference Using Gibbs Sampling) project started in 1989. OpenBUGS is the open source variant of WinBUGS. Pros: flexibility and programming capabilities Cons: complex interface Price: free

147 Take home messages

148 Take home messages Advanced statistical methods are best seen as a set of modular tools which can be applied and tailored to the specific task of interest. The concept of generalized linear model highlights how most statistical methods can be considered part of a broader family of methods, depending on the specific framework or link function.

149 Many thanks for your attention!

