The EBM 3-step approach
How an article should be appraised, in 3 steps:
Step 1 – Are the results of the study (internally) valid?
Step 2 – What are the results?
Step 3 – How can I apply these results to patient care?
Guyatt and Rennie, Users' guide to the medical literature, 2002
The ultimate goal of any clinical or scientific observation is the appraisal of causality.
Bradford Hill causality criteria (*statistics is important here)
Strength:* precisely defined (p<0.05, weaker criterion) and with strong relative risk (≤0.83 or ≥1.20) in the absence of multiplicity issues (stronger criterion)
Consistency:* results in favor of the association must be confirmed in other studies
Temporality: exposure must precede the event in a realistic fashion
Coherence: the hypothetical cause-effect relationship is not in contrast with other biologic or natural history findings
Mente et al, Arch Intern Med 2009
Bradford Hill causality criteria (*statistics is important here)
Biologic gradient:* exposure dose and risk of disease are positively (or negatively) associated on a continuum
Experiment: experimental evidence from laboratory studies (weaker criterion) or randomized clinical trials (stronger criterion)
Specificity: exposure is associated with a single disease (does not apply to multifactorial conditions)
Plausibility: the hypothetical cause-effect relationship makes sense from a biologic or clinical perspective (weaker criterion)
Analogy: the hypothetical cause-effect relationship is based on analogic reasoning (weaker criterion)
Mente et al, Arch Intern Med 2009
Randomization
Randomization is the technique that defines experimental studies in humans (but not only in them), and enables the correct application of statistical hypothesis tests in a frequentist framework (according to Ronald Fisher's theory).
Randomization means assigning a patient (or a study unit) at random to one of the treatments.
Over large numbers, randomization minimizes the risk of imbalances in patient or procedural features, but this does not hold true for small samples and for a large set of features.
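As a minimal sketch of what "assigning at random" means in practice (Python; the function name, arm labels, sample size and seed are illustrative assumptions, not part of the slides):

```python
import random

def randomize(n_patients, arms=("A", "B"), seed=42):
    """Simple 1:1 randomization: each patient is independently
    assigned to one arm at random."""
    rng = random.Random(seed)
    return [rng.choice(arms) for _ in range(n_patients)]

allocation = randomize(1000)
# Over large numbers the arms end up roughly (not exactly) balanced:
n_a = allocation.count("A")
```

Note that only *approximate* balance is guaranteed, and only as numbers grow, which is exactly the caveat about small samples above.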
Any clinical or scientific comparison can be viewed as…
A battle between an underlying hypothesis (null, H0), stating that there is no meaningful difference or association (beyond random variability) between 2 or more populations of interest (from which we are sampling), and an alternative hypothesis (H1), which implies that there is a non-random difference between such populations.
Any statistical test is a test trying to convince us that H0 is false (thus implying the working truthfulness of H1).
Falsifiability
Falsifiability or refutability of a statement, hypothesis, or theory is the inherent possibility to prove it to be false.
A statement is called falsifiable if it is possible to conceive an observation or an argument which proves the statement in question to be false.
In this sense, falsify is synonymous with nullify, meaning not "to commit fraud" but "to show to be false".
Statistical or clinical significance?
Statistical and clinical significance are 2 very different concepts.
A clinically significant difference, if demonstrated beyond the play of chance, is clinically relevant and thus merits subsequent action (if cost and tolerability issues do not outweigh it).
A statistically significant difference is a probabilistic concept and should be viewed in light of the distance from the null hypothesis and the chosen significance threshold.
Samples and populations
And this might be its universal population…
But what if THIS is its universal population?
Any inference thus depends on our confidence in its likelihood.
Alpha and type I error
Whenever I perform a test, there is thus a risk of a FALSE POSITIVE result, ie REJECTING A TRUE null hypothesis.
This error is called type I, is measured as alpha, and its unit is the p value.
The lower the p value, the lower the risk of falling into a type I error (ie the HIGHER the SPECIFICITY of the test).
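The meaning of alpha can be checked by simulation: when H0 is really true, about 5% of tests at the 0.05 threshold still come out "significant". A sketch assuming a two-sample z test with known unit variance (function names, sample sizes and seed are hypothetical choices):

```python
import math
import random

rng = random.Random(0)

def two_sample_z_p(x, y):
    """Two-tailed p value from a z test, assuming known unit variance."""
    n, m = len(x), len(y)
    z = (sum(x) / n - sum(y) / m) / math.sqrt(1 / n + 1 / m)
    return math.erfc(abs(z) / math.sqrt(2))  # P(|Z| > z) for a standard normal

# Simulate 2000 experiments in which H0 is TRUE (both groups from N(0, 1)):
n_sim, false_positives = 2000, 0
for _ in range(n_sim):
    x = [rng.gauss(0, 1) for _ in range(30)]
    y = [rng.gauss(0, 1) for _ in range(30)]
    if two_sample_z_p(x, y) < 0.05:
        false_positives += 1
type_i_rate = false_positives / n_sim  # should hover around alpha = 0.05
```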
Alpha and type I error
Type I error is like a MIRAGE: because I see something that does NOT exist.
Beta and type II error
Whenever I perform a test, there is also a risk of a FALSE NEGATIVE result, ie NOT REJECTING A FALSE null hypothesis.
This error is called type II, is measured as beta, and its unit is a probability.
The complement of beta is called power.
The lower the beta, the lower the risk of missing a true difference (ie the HIGHER the SENSITIVITY of the test).
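Power (1 − beta) can be estimated by the same kind of simulation, now generating data under a true difference. A sketch under assumed settings (true mean difference 0.8, unit variance, n = 30 per group, all hypothetical):

```python
import math
import random

rng = random.Random(1)

def two_sample_z_p(x, y):
    """Two-tailed p value from a z test, assuming known unit variance."""
    n, m = len(x), len(y)
    z = (sum(x) / n - sum(y) / m) / math.sqrt(1 / n + 1 / m)
    return math.erfc(abs(z) / math.sqrt(2))

# Simulate experiments in which H1 is TRUE (true mean difference = 0.8):
n_sim, hits = 2000, 0
for _ in range(n_sim):
    x = [rng.gauss(0.8, 1) for _ in range(30)]
    y = [rng.gauss(0.0, 1) for _ in range(30)]
    if two_sample_z_p(x, y) < 0.05:
        hits += 1
power = hits / n_sim   # empirical 1 - beta
beta = 1 - power       # empirical risk of a false negative
```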
Beta and type II error
Type II error is like being BLIND: because I do NOT see something that exists.
Accuracy and precision
Accuracy measures the distance from the true value.
Precision measures the spread in the measurements.
Accuracy and precision
Thus:
Precision expresses the extent of RANDOM ERROR.
Accuracy expresses the extent of SYSTEMATIC ERROR (ie bias).
Validity
Internal validity entails both PRECISION and ACCURACY (ie does a study provide a truthful answer to the research question?).
External validity expresses the extent to which the results can be applied to other contexts and settings (it corresponds to the distinction between SAMPLE and POPULATION).
Intention-to-treat analysis
Intention-to-treat (ITT) analysis is an analysis based on the initial treatment intent, irrespective of the treatment eventually administered.
ITT analysis is intended to avoid various types of bias that can arise in intervention research, especially procedural, compliance and survivor bias.
However, ITT dilutes the power to achieve statistically and clinically significant differences, especially as drop-in and drop-out rates rise.
Per-protocol analysis
In contrast to the ITT analysis, the per-protocol (PP) analysis includes only those patients who complete the entire clinical trial or other particular procedure(s), or have complete data.
In PP analysis each patient is categorized according to the actual treatment received, and not according to the originally intended treatment assignment.
PP analysis is highly prone to bias, and is useful almost only in equivalence or non-inferiority studies.
ITT vs PP
100 pts enrolled → RANDOMIZATION:
- 50 pts to group A (more toxic): 45 pts treated with A, 5 shifted to B because of poor global health (all 5 died)
- 50 pts to group B (conventional Rx, less toxic): 50 pts treated with B (none died)
ITT: 10% mortality in group A vs 0% in group B, p=0.021 in favor of B
PP: 0% (0/45) mortality in group A vs 9.1% (5/55) in group B, p=0.038 in favor of A
Mean (arithmetic)
Characteristics:
- summarises information well
- discards a lot of information (dispersion?)
Assumptions:
- data are not skewed: outliers make the mean very different and distort it
- measured on a measurement scale: cannot find the mean of a categorical measure ('average' stent diameter may be meaningless)
Median
What is it? The one in the middle: place values in order, and the median is the central one.
Definition: equally distant from all other values.
Used for: ordinal data; skewed data / outliers.
Standard deviation
SD = √[ Σ(x − x̄)² / (N − 1) ]; variance = SD²
Standard deviation (SD): approximates the population σ as N increases.
Advantages: with the mean it enables a powerful synthesis:
- mean ± 1.00 SD ≈ 68% of data
- mean ± 1.96 SD ≈ 95% of data
- mean ± 2.58 SD ≈ 99% of data
Disadvantages: is based on normality assumptions.
Interquartile range
For a continuous variable (eg lesion length), the interquartile range spans from the 25th to the 75th percentile, ie from the 1st to the 3rd quartile.
In the worked example, 1st–3rd quartile = 16.5; 23.5, so the interquartile range = 23.5 − 16.5 = 7.0.
Coefficient of variation
CV = (standard deviation / mean) × 100
The coefficient of variation (CV) is an index of relative variability.
CV is dimensionless.
CV enables you to compare data dispersion of variables with different units of measurement.
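The SD and CV formulas above can be sketched directly (Python; the height and weight values are made-up illustrative data):

```python
import math

def mean(xs):
    return sum(xs) / len(xs)

def sd(xs):
    """Sample standard deviation: sqrt( sum((x - mean)^2) / (N - 1) )."""
    m = mean(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

def cv(xs):
    """Coefficient of variation, in percent: dimensionless relative spread."""
    return sd(xs) / mean(xs) * 100

heights_cm = [170, 175, 168, 182, 177]   # hypothetical data
weights_kg = [70, 82, 65, 90, 74]        # hypothetical data
# Because CV is dimensionless, we can compare dispersion across units:
cv_h, cv_w = cv(heights_cm), cv(weights_kg)
```

Here the weights are relatively more dispersed than the heights even though both are measured in different units.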
Point estimation & confidence intervals
Using summary statistics (mean and standard deviation for normal variables, or proportion for categorical variables) and factoring in sample size, we can build confidence intervals or test hypotheses about whether we are sampling from a given population or not.
This can be done by creating a powerful tool, which weighs our dispersion measures by means of the sample size: the standard error.
First you need the SE
We can easily build the standard error of a proportion, according to the following formula:
SE = √[ P × (1 − P) / n ]
where variance = P × (1 − P) and n is the sample size.
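The formula above is a one-liner in code (Python; the counts are hypothetical illustrative numbers):

```python
import math

def se_proportion(p, n):
    """Standard error of a proportion: sqrt( p * (1 - p) / n )."""
    return math.sqrt(p * (1 - p) / n)

# eg 30 diabetics observed out of 200 patients (hypothetical numbers):
p_hat = 30 / 200                 # observed proportion = 0.15
se = se_proportion(p_hat, 200)   # ≈ 0.025
```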
Point estimation & confidence intervals
We can then create a simple test to check whether the summary estimate we have found is compatible, within random variation, with the corresponding reference population mean.
The Z test (when the population SD is known) and the t test (when the population SD is only estimated) are thus used, and both can be viewed as a signal to noise ratio.
Signal to noise ratio
Signal to noise ratio = Signal / Noise
From the Z test…
Z score = Absolute difference in summary estimates / Standard error
The resulting z score corresponds to a distinct tail probability of the Gaussian curve (eg 1.96 corresponds to a one-tailed probability of 2.5%, or a two-tailed probability of 5%).
…to confidence intervals
The standard error (SE or SEM) can be used to test a hypothesis or create a confidence interval (CI) around a summary estimate (eg a mean, or a mortality rate):
95% CI = mean ± 2 SE, where SE = SD / √n
95% means that, if we repeat the study 20 times, 19 times out of 20 the interval will include the true population average.
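A sketch of a 95% CI around a mean, using 1.96 rather than the rounded 2 from the slide (Python; the blood pressure values are hypothetical):

```python
import math

def ci95_mean(xs):
    """95% CI for a mean: mean ± 1.96 * SE, with SE = SD / sqrt(n)."""
    n = len(xs)
    m = sum(xs) / n
    sd = math.sqrt(sum((x - m) ** 2 for x in xs) / (n - 1))
    se = sd / math.sqrt(n)
    return m - 1.96 * se, m + 1.96 * se

# hypothetical systolic blood pressure values (mmHg):
bp = [120, 135, 128, 142, 131, 125, 138, 129, 133, 127]
lo, hi = ci95_mean(bp)   # interval around the sample mean of 130.8
```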
P values and confidence intervals
P values and confidence intervals are strictly connected.
Any hypothesis test providing a significant result (eg p=0.045) means that we can be confident at 95.5% that the population average difference lies far from zero (ie from the null hypothesis).
P values and confidence intervals
Figure: confidence intervals contrasting an important vs a trivial difference, and a significant (p<0.05) vs a non-significant (p>0.05) result, relative to the null hypothesis (H0).
Power and sample size
Whenever designing a study or analyzing a dataset, it is important to estimate the sample size or the power of the comparison.
SAMPLE SIZE: setting a specific alpha and a specific beta, you calculate the necessary sample size given the average inter-group difference and its variation.
POWER: given a specific sample size and alpha, in light of the calculated average inter-group difference and its variation, you obtain an estimate of the power (ie 1 − beta).
Hierarchy of analysis
A statistical analysis can be:
Univariate (e.g. when describing a mean or standard deviation)
Bivariate (e.g. when comparing age in men and women)
Multivariable (e.g. when appraising how age and gender impact on the risk of death)
Multivariate (e.g. when appraising how age and gender simultaneously impact on the risk of death and hospital costs)
Types of variables
PAIRED OR REPEATED MEASURES: eg blood pressure measured twice in the same patients at different times.
UNPAIRED OR INDEPENDENT MEASURES: eg blood pressure measured in several different groups of patients only once.
Types of variables
CATEGORY: nominal (categories) or ordinal (ordered ranks).
QUANTITY: discrete (counting) or continuous (measuring).
Statistical tests
Are data categorical or continuous?
Categorical data: compare proportions in groups.
Continuous data: compare means or medians in groups. How many groups?
- Two groups: normal data → t test; non-normal data → Mann-Whitney U test.
- More than two groups: normal data → ANOVA; non-normal data → Kruskal-Wallis.
Student t test
It is used to test the null hypothesis that the means of two normally distributed populations are equal.
Given two data sets (each with its mean, SD and number of data points), the t test determines whether the means are distinct, provided that the underlying distributions can be assumed to be normal.
The Student t test should be used if the (unknown) variances of the two populations are also assumed to be equal; the form of the test used when this assumption is dropped is sometimes called Welch's t test.
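Since Welch's t test is mentioned above, here is a sketch of its statistic and the Welch–Satterthwaite degrees of freedom (Python; the data are hypothetical, and the p value would then be read from a t distribution with df degrees of freedom, eg via a statistical package):

```python
import math

def welch_t(x, y):
    """Welch's t statistic and approximate degrees of freedom
    (equal variances NOT assumed)."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)   # sample variances
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    se2 = vx / nx + vy / ny
    t = (mx - my) / math.sqrt(se2)
    # Welch-Satterthwaite approximation for the degrees of freedom:
    df = se2 ** 2 / ((vx / nx) ** 2 / (nx - 1) + (vy / ny) ** 2 / (ny - 1))
    return t, df

group_a = [5.1, 4.9, 6.2, 5.8, 5.5]        # hypothetical measurements
group_b = [4.2, 4.0, 4.8, 4.5, 4.1, 4.6]   # hypothetical measurements
t, df = welch_t(group_a, group_b)
```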
1-way ANOVA
As with the t test, ANOVA is appropriate when the data are continuous, when the groups are assumed to have similar variances, and when the data are normally distributed.
ANOVA is based upon a comparison of variance attributable to the independent variable (variability between groups or conditions) relative to the variance within groups resulting from random chance. In fact, the formula involves dividing the between-group variance estimate by the within-group variance estimate.
Kruskal-Wallis test
Post-hoc analysis with Mann-Whitney U and Bonferroni correction.
Compare continuous variables: three (or more) paired groups
Again ask yourself… parametric or not?
If parametric: ANOVA for repeated measures (in SPSS… in the General Linear Model).
If non-parametric: Friedman test.
2-way ANOVA
A mixed-design ANOVA is used to test for differences between independent groups whilst subjecting participants to repeated measures. In a mixed-design ANOVA model, one factor is a between-subjects variable (drug) and the other is a within-subjects variable (BP).
CAMELOT, JAMA 2004
Binomial test
Is the percentage of diabetics in this sample comparable with the known chronic AF population? We assume the population rate is 15%.
Compare discrete variables
The second basis is the "observed" vs "expected" relation.
Compare event rates
Absolute Risk (AR): 7.9% (47/592) and 15.1% (89/591)
Absolute Risk Reduction (ARR): 7.9% (47/592) − 15.1% (89/591) = −7.2%
Relative Risk (RR): 7.9% (47/592) / 15.1% (89/591) = 0.52 (given an equivalence value of 1)
Relative Risk Reduction (RRR): 1 − 0.52 = 0.48, or 48%
Odds Ratio (OR): 8.6% (47/545) / 17.7% (89/502) = 0.49 (given an equivalence value of 1)
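Using the slide's own counts, these measures can be recomputed directly; tiny discrepancies versus the slide (eg RR 0.527 vs 0.52) arise only because the slide rounds the displayed percentages first:

```python
events_exp, n_exp = 47, 592    # counts from the slide
events_ctl, n_ctl = 89, 591

ar_exp = events_exp / n_exp                   # absolute risk ≈ 0.079
ar_ctl = events_ctl / n_ctl                   # absolute risk ≈ 0.151
arr = ar_exp - ar_ctl                         # absolute risk reduction ≈ -0.071
rr = ar_exp / ar_ctl                          # relative risk ≈ 0.53
rrr = 1 - rr                                  # relative risk reduction ≈ 0.47
odds_exp = events_exp / (n_exp - events_exp)  # 47 / 545
odds_ctl = events_ctl / (n_ctl - events_ctl)  # 89 / 502
odds_ratio = odds_exp / odds_ctl              # ≈ 0.49
```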
Post-hoc groups
"…the chi-square test was used to determine differences between groups with respect to the primary and secondary end points. Odds ratios and their 95 percent confidence intervals were calculated. Comparisons of patient characteristics and survival outcomes were tested with the chi-square test, the chi-square test for trend, Fisher's exact test, or Student's t-test, as appropriate."
This is a sub-group! Bonferroni!
The significance threshold for the p value should be divided by the number of tests performed… or the computed p value multiplied by the number of tests… P=0.12 and not P=0.04!!
Wenzel et al, NEJM 2004
Fisher exact test
For a 2×2 table with cells a, b (Event: Exp, Ctrl) and c, d (No event: Exp, Ctrl), row totals r1 and r2, column totals s1 and s2, and grand total N:
P = (r1! × r2! × s1! × s2!) / (N! × a! × b! × c! × d!)
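The formula gives the probability of one particular table given fixed margins; a sketch in code (Python; the 2×2 counts are hypothetical). The test's p value would then sum the probabilities of all tables as extreme or more extreme than the observed one:

```python
from math import factorial

def fisher_table_p(a, b, c, d):
    """Probability of a single 2x2 table under Fisher's exact formula:
    P = (r1! r2! s1! s2!) / (N! a! b! c! d!)."""
    r1, r2 = a + b, c + d          # row totals
    s1, s2 = a + c, b + d          # column totals
    n = a + b + c + d              # grand total
    return (factorial(r1) * factorial(r2) * factorial(s1) * factorial(s2)) / (
        factorial(n) * factorial(a) * factorial(b) * factorial(c) * factorial(d)
    )

# hypothetical 2x2 table: 1/10 events (Exp) vs 5/10 events (Ctrl)
p_table = fisher_table_p(1, 9, 5, 5)
```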
McNemar test
The McNemar test is a hypothesis test to compare categorical variables in related samples.
For instance, how can I appraise the statistical significance of a change in symptom status (asymptomatic vs symptomatic) in the same patients over time?
The McNemar test exploits the discordant pairs to generate a p value.
Example (baseline status in rows, follow-up status in columns, symptomatic / asymptomatic): baseline symptomatic 15 / 3, baseline asymptomatic 5 / 17 → p=0.72 at the McNemar test; a second example table, with more asymmetric discordant pairs, gives p=0.013.
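An exact version of the McNemar test is simply a two-tailed binomial test on the discordant pairs; a sketch (Python), which reproduces the first worked example (discordant pairs 3 and 5, p ≈ 0.73, reported on the slide as 0.72):

```python
from math import comb

def mcnemar_exact_p(b, c):
    """Exact McNemar test: two-tailed binomial test on the b + c
    discordant pairs, under H0 that each direction is equally likely."""
    n = b + c
    k = min(b, c)
    p = 2 * sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(p, 1.0)

# First example table: 3 patients improved, 5 worsened (discordant pairs)
p = mcnemar_exact_p(3, 5)
```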
Take home messages
Biostatistics is best seen as a set of different tools and methods which are used according to the problem at hand.
Nobody is experienced in statistics at the beginning, and only by facing everyday real-world problems can you familiarize yourself with different techniques and approaches.
In general terms, it is also crucial to remember that the easiest and simplest way to solve a statistical problem, if appropriate, is also the best one, and the one recommended by reviewers.
Many thanks for your attention!
Medical statistics for cardiovascular disease – Part 2
Giuseppe Biondi-Zoccai, MD
Sapienza University of Rome, Latina, Italy
Linear regression
Which of the different possible lines that I can graphically trace and compute is the best regression line?
It can be intuitively understood that it is the line that minimizes the differences between observed values (yi) and estimated values (yi').
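"Minimizing the differences" means minimizing the sum of squared residuals (yi − yi')², which has a closed-form solution; a sketch (Python; the data points are hypothetical, placed near the line y = 2x + 1):

```python
def least_squares(xs, ys):
    """Slope and intercept minimizing sum((yi - yi')^2) for yi' = a + b*xi."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    intercept = my - slope * mx
    return slope, intercept

# hypothetical data lying near the line y = 2x + 1:
xs = [1, 2, 3, 4, 5]
ys = [3.1, 4.9, 7.2, 9.0, 10.8]
slope, intercept = least_squares(xs, ys)
```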
Correlation
The square root of the coefficient of determination (R²) is the correlation coefficient (R) and shows the degree of linear association between 2 continuous variables, but disregards causation (K. Pearson).
It assumes values between −1.0 (negative association), 0 (no association), and +1.0 (positive association).
It can be summarized as a point summary estimate, with a specific standard error, 95% confidence interval, and p value.
Dangers of not plotting data
4 sets of data: all with the same R=0.81!* (*at linear regression analysis)
What about non-linear associations?
Each number corresponds to the correlation coefficient for linear association (R)!!!
Pearson vs Spearman
Whenever the independent and dependent variables can be assumed to belong to normal distributions, the Pearson linear correlation method can be used, maximizing statistical power and yield.
Whenever the data are sparse, rare, and/or not belonging to normal distributions, the non-parametric Spearman correlation method should be used, which yields the rank correlation coefficient (rho), but not its R².
C. Spearman
Bland-Altman plot
Plots the difference of A − B in each case against the mean of measurements A and B in each case.
Regression to the mean: don't bet on past rookies of the year!
Logistic regression
Logistic regression is based on the logit, which transforms a dichotomous dependent variable into a continuous one.
We model ln[p/(1−p)] instead of just p, and the linear model is written:
ln[p/(1−p)] = ln(p) − ln(1−p) = β0 + β1X
Generalized Linear Models
All generalized linear models have three components:
The random component identifies the response variable and assumes a probability distribution for it.
The systematic component specifies the explanatory variables used as predictors in the model (linear predictor).
The link describes the functional relationship between the systematic component and the expected value (mean) of the random component.
The GLM relates a function of that mean to the explanatory variables through a prediction equation having linear form. The model formula states that: g(µ) = α + β1x1 + … + βkxk
Generalized Linear Models
Through differing link functions, the GLM corresponds to other well known models:
- Normal distribution: identity link
- Exponential and Gamma distributions: inverse link
- Inverse Gaussian distribution: inverse squared link
- Poisson distribution: log link
- Binomial distribution: logit link
Survival analysis
Patients experiencing one or more events are called responders.
Patients who, at the end of the observational period or before such time, get out of the study without having experienced any event, are called censored.
Survival analysis
Diagram: six patients (A–F) followed over time; some are withdrawn or lost to follow-up before the study end.
A and F: events; B, C, D and E: censored.
Multivariable statistical methods
The goal is to explain the variation in the dependent variable by other variables simultaneously.
Independent variables (also called predictors, regressors, explanatory variables, prognostic factors, or manipulated variables) have an effect on the dependent variable (also called response), which is influenced by them.
Bivariate statistical methods: one Dep. Var. ~ one Ind. Var.
- Qualitative D.V., qualitative I.V.: Chi²
- Qualitative D.V., quantitative I.V.: logistic regression
- Quantitative D.V., qualitative I.V.: 1-way ANOVA
- Quantitative D.V., quantitative I.V.: simple regression
Multivariable statistical methods: one Dep. Var. ~ several Ind. Var.
- Qualitative D.V., qualitative I.V.: Chi²
- Qualitative D.V., quantitative I.V.: logistic regression
- Quantitative D.V., qualitative I.V.: ANOVA
- Quantitative D.V., quantitative I.V.: regression
Multivariable analysis
The methods mentioned have specific application domains depending on the nature of the variables involved in the analysis.
But conceptually, and in terms of calculation, there are a lot of similarities between these techniques.
Each of the multivariable methods evaluates the effect of an independent variable on the dependent variable, controlling for the effect of the other independent variables.
Methods such as multiple regression, multi-factor ANOVA, and analysis of covariance have the same assumptions towards the distribution of the dependent variable.
We will learn more about the concepts of multivariable analysis by reviewing the simple linear regression model.
Multiple linear regression
Simple linear regression is a statistical model to predict the value of one continuous variable Y (dependent, response) from another continuous variable X (independent, predictor, covariate, prognostic factor).
Multiple linear regression is a natural extension of the simple linear regression model.
We use it to investigate the effect on the response variable of several predictor variables, simultaneously.
It is a hypothetical model of the relationship between several independent variables and a response variable.
Let's start by reviewing the concepts of the simple linear regression model.
Multiple regression models
Model terms may be divided into the following categories:
- Constant term
- Linear terms / main effects (e.g. X1)
- Interaction terms (e.g. X1·X2)
- Quadratic terms (e.g. X1²)
- Cubic terms (e.g. X1³)
Models are usually described by the highest term present:
- Linear models have only linear terms.
- Interaction models have linear and interaction terms.
- Quadratic models have linear, quadratic and first order interaction terms.
- Cubic models have terms up to third order.
The model-building process
Source: Applied Linear Statistical Models, Neter, Kutner, Nachtsheim, Wasserman
AIC and BIC
AIC (Akaike Information Criterion) and BIC (Schwarz Information Criterion) are two popular model selection methods. They not only reward goodness of fit, but also include a penalty that is an increasing function of the number of estimated parameters. This penalty discourages overfitting. The preferred model is the one with the lowest value for AIC or for BIC. These criteria attempt to find the model that best explains the data with a minimum of free parameters. The AIC penalizes free parameters less strongly than does the Schwarz criterion.
AIC = 2k + n ln(SSError / n)
BIC = n ln(SSError / n) + k ln(n)
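The two formulas above can be sketched directly; the comparison uses hypothetical sums of squared errors for a 2-parameter model and a 3-parameter model that barely improves the fit:

```python
import math

def aic(k, n, sse):
    """AIC = 2k + n ln(SSE / n), as in the formula above."""
    return 2 * k + n * math.log(sse / n)

def bic(k, n, sse):
    """BIC = n ln(SSE / n) + k ln(n): penalizes parameters more strongly
    than AIC whenever ln(n) > 2."""
    return n * math.log(sse / n) + k * math.log(n)

# hypothetical model comparison: the bigger model barely improves the fit
n = 100
aic_small, bic_small = aic(2, n, 250.0), bic(2, n, 250.0)
aic_big, bic_big = aic(3, n, 248.0), bic(3, n, 248.0)
# Both criteria prefer the smaller model here (lower value = preferred)
```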
Two-Factor ANOVA: Introduction
A method for simultaneously analyzing two factors affecting a response:
- a group effect: treatment group or dose level;
- a blocking factor whose variation can be separated from the error variation to give more precise group comparisons: study center, gender, disease severity, diagnostic group, …
One of the most common ANOVA methods used in clinical trial analysis.
Similar assumptions as for single-factor ANOVA.
Non-parametric alternative: Friedman test.
Two-Factor ANOVA: The Model
Y_ijk = µ + α_i + β_j + (αβ)_ij + ε_ijk
where Y_ijk is the response score of subject k in column i and row j; µ the overall mean; α_i the effect of the treatment factor (a levels, or i columns); β_j the effect of the blocking factor (b levels, or j rows); (αβ)_ij the interaction effect; and ε_ijk the error, or effect of not-measured variables.
Analysis of Covariance: ANCOVA
A method for comparing response means among two or more groups adjusted for a quantitative concomitant variable, or "covariate", thought to influence the response.
The response variable is explained by independent quantitative variable(s) and qualitative variable(s).
A combination of ANOVA and regression.
Increases the precision of comparison of the group means by decreasing the error variance.
Widely used in clinical trials.
Analysis of Covariance: The model
The covariance model for a single factor with fixed levels adds another term to the ANOVA model, reflecting the relationship between the response variable and the concomitant variable.
The concomitant variable is centered around the mean so that the constant µ represents the overall mean in the model.
Repeated-Measures: Basic concepts
'Repeated measures' are measurements taken from the same subject (patient) at repeated time intervals.
Many clinical studies require multiple visits during the trial, with response measurements made at each visit.
A repeated measures study may involve several treatments or only a single treatment.
'Repeated measures' are used to characterize a response profile over time.
Main research question: is the mean response profile for one treatment group the same as for another treatment group or a placebo group?
Comparison of response profiles can be tested with a single F-test.
Repeated-Measures: Comparing profiles
Source: Common Statistical Methods for Clinical Research, 1997, Glenn A. Walker
Repeated Measures ANOVA: Random Effects – Mixed Model
Figure: prediction formula from a mixed model. What are your conclusions about the between-subjects species effect and the within-subjects season effect?
Multiple Regression: SPSS Variable Selection Methods
Enter: a procedure for variable selection in which all variables in a block are entered in a single step.
Forward Selection (Likelihood Ratio): stepwise selection method with entry testing based on the significance of the score statistic, and removal testing based on the probability of a likelihood-ratio statistic based on the maximum partial likelihood estimates.
Backward Elimination (Likelihood Ratio): backward stepwise selection. Removal testing is based on the probability of the likelihood-ratio statistic based on the maximum partial likelihood estimates.
Cox PH analysis: the problem
With a continuous outcome variable we use linear regression; with a dichotomous (binary) outcome variable we use logistic regression.
We can't use ordinary linear regression here, because how do we account for the censored data?
We can't use logistic regression without ignoring the time component.
Where the time to an event is the outcome of interest, Cox regression is the most popular regression technique.
Question: when there are many confounding covariates to adjust for:
Matching based on many covariates is not practical.
Stratification is difficult: as the number of covariates increases, the number of strata grows exponentially (1 covariate: 2 strata; 5 covariates: 32, ie 2⁵, strata).
Regression adjustment may not be possible (potential problem: over-fitting).
Propensity score
Replace the collection of confounding covariates (age, gender, ejection fraction, risk factors, lesion characteristics, …) with one scalar function of these covariates: 1 composite covariate, the propensity score, a balancing score.
Compare treatments with propensity score
Three common methods of using the propensity score to adjust results:
- Matching
- Stratification
- Regression adjustment
Goal of a clinical trial is appraisal of…
Superiority: difference in biologic effect or clinical effect.
Equivalence: lack of meaningful/clinically relevant difference in biologic effect or clinical effect.
Non-inferiority: lack of meaningful/clinically relevant increase in adverse clinical events.
Superiority RCT
Possibly the greatest medical invention ever.
Randomization of an adequate number of subjects ensures prognostically similar groups at study beginning.
If thorough blinding is enforced, groups maintain a similar prognosis even later on (except for the effect of the experiment).
Sloppiness/cross-over makes arms more similar -> the traditional treatment is not discarded.
Per-protocol analysis is almost always misleading.
Equivalence/non-inferiority RCT
A completely different paradigm.
The goal is to conclude that the new treatment is not "meaningfully worse" than the comparator.
Requires a subjective margin.
Sloppiness/cross-over makes arms more similar -> the traditional treatment is more likely to be discarded.
Per-protocol analysis is possibly useful to analyze safety, but the bulk of the analysis is still based on the intention-to-treat principle.
Superiority, equivalence or non-inferiority?
Vassiliades et al, JACC 2005
Possible outcomes in a non-inferiority trial (observed difference & 95% CI)
Figure: confidence intervals plotted along an axis running from "New Treatment Better" to "New Treatment Worse".
Typical non-inferiority design
Hiro et al, JACC 2009
Cumulative meta-analysis
Antman et al, JAMA 1992
Meta-analysis of intervention studies
De Luca et al, EHJ 2009
Indirect and network meta-analyses
Direct plus indirect (i.e. network)
Jansen et al, ISPOR 2008
Resampling
Resampling refers to the use of the observed data or of a data generating mechanism (such as a die or computer-based simulation) to produce new hypothetical samples, the results of which can then be analyzed.
The term computer-intensive methods is also frequently used to refer to techniques such as these…
Bootstrap
The bootstrap is a modern, computer-intensive, general purpose approach to statistical inference, falling within a broader class of resampling methods.
Bootstrapping is the practice of estimating properties of an estimator (such as its variance) by measuring those properties when sampling from an approximating distribution.
One standard choice for an approximating distribution is the empirical distribution of the observed data.
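A sketch of a percentile bootstrap confidence interval for a mean, resampling from the empirical distribution with replacement (Python; the data, number of resamples and seed are illustrative assumptions):

```python
import random

def bootstrap_ci_mean(data, n_boot=5000, alpha=0.05, seed=7):
    """Percentile bootstrap CI for the mean: draw n_boot resamples
    with replacement from the observed data, take the alpha/2 and
    1 - alpha/2 percentiles of the resampled means."""
    rng = random.Random(seed)
    n = len(data)
    means = sorted(
        sum(rng.choice(data) for _ in range(n)) / n for _ in range(n_boot)
    )
    lo = means[int(n_boot * alpha / 2)]
    hi = means[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi

data = [4.1, 5.3, 2.8, 6.0, 4.7, 5.9, 3.3, 4.4, 5.1, 4.8]  # hypothetical
lo, hi = bootstrap_ci_mean(data)  # interval around the sample mean of 4.64
```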
Jackknife
Jackknifing is a resampling method based on the creation of several subsamples by excluding a single case at a time.
Thus, there are only N jackknife samples for any given original sample with N cases.
After the systematic recomputation of the statistic estimate of choice is completed, a point estimate and an estimate for the variance of the statistic can be calculated.
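A sketch of the jackknife standard error of a mean (Python; the data are the same hypothetical values used above). For the mean, the jackknife SE coincides exactly with the classical SD/√n, which makes it a handy sanity check:

```python
import math

def jackknife_se_mean(data):
    """Jackknife SE of the mean: N leave-one-out subsamples,
    variance = (N - 1)/N * sum((loo_mean_i - grand_mean)^2)."""
    n = len(data)
    # one leave-one-out estimate per excluded case:
    loo_means = [(sum(data) - x) / (n - 1) for x in data]
    grand = sum(loo_means) / n
    var = (n - 1) / n * sum((m - grand) ** 2 for m in loo_means)
    return math.sqrt(var)

data = [4.1, 5.3, 2.8, 6.0, 4.7, 5.9, 3.3, 4.4, 5.1, 4.8]  # hypothetical
se_jack = jackknife_se_mean(data)
```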
The Bayes theorem
The main feature of Bayesian statistics is that it takes into account prior knowledge of the hypothesis.
Bayes theorem
P(H | D) = P(D | H) × P(H) / P(D)
where P(H | D) is the posterior (or conditional) probability of hypothesis H; P(D | H) is the likelihood of the hypothesis (the conditional probability of the data given H); P(H) is the prior (or marginal) probability of the hypothesis; and P(D) is the probability of the data (its marginal probability, acting as a normalizing constant).
Thus it relates the conditional and marginal probabilities of two random events, and it is often used to compute posterior probabilities given observations.
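A sketch of the theorem in action on a hypothetical diagnostic test (prevalence 10%, sensitivity 90%, specificity 80%; all numbers are illustrative assumptions), expanding P(D) over H and not-H:

```python
def posterior(prior, likelihood, likelihood_given_not_h):
    """Bayes theorem: P(H|D) = P(D|H) P(H) / P(D), with
    P(D) = P(D|H) P(H) + P(D|not H) P(not H)."""
    p_d = likelihood * prior + likelihood_given_not_h * (1 - prior)
    return likelihood * prior / p_d

# hypothetical test: prevalence 10%, sensitivity 90%,
# specificity 80% (so P(positive | no disease) = 0.20)
p_disease_given_pos = posterior(prior=0.10, likelihood=0.90,
                                likelihood_given_not_h=0.20)
# despite a "good" test, the posterior is only about 1/3,
# because the prior (prevalence) is low
```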
"Classical" statistical inference vs Bayesian inference
Before the next module, a question for you: who is a Bayesian?
A Bayesian is one who, vaguely expecting a horse and catching a glimpse of a donkey, strongly believes he has seen a mule.
JMP Statistical Discovery Software
JMP is a software package that was first developed by John Sall, co-founder of SAS, to perform simple and complex statistical analyses. It dynamically links statistics with graphics to interactively explore, understand, and visualize data. This allows you to click on any point in a graph and see the corresponding data point highlighted in the data table and in other graphs.
JMP provides a comprehensive set of statistical tools as well as design of experiments and statistical quality control in a single package.
JMP allows for custom programming and script development via JSL, originally known as "John's Scripting Language".
An add-on, JMP Genomics, comes with over 100 analytic procedures to facilitate the treatment of data involving genetics, microarrays or proteomics.
Pros: very intuitive, lean package for design and analysis in research.
Cons: less complete and less flexible than the complete SAS system.
Price: €€€€.
R
R is a programming language and software environment for statistical computing and graphics, and it is an implementation of the S programming language with lexical scoping semantics.
R is widely used for statistical software development and data analysis. Its source code is freely available under the GNU General Public License, and pre-compiled binary versions are provided for various operating systems. R uses a command line interface, though several graphical user interfaces are available.
Pros: flexibility and programming capabilities (eg for bootstrap), sophisticated graphical capabilities.
Cons: complex and user-unfriendly interface.
Price: free.
141 S and S-PLUS
S-PLUS is a commercial package sold by TIBCO Software Inc. with a focus on exploratory data analysis, graphics, and statistical modeling.
It is an implementation of the S programming language. It features object-oriented programming capabilities and advanced analytical algorithms (eg for robust regression, repeated measurements, ...).
Pros: flexibility and programming capabilities (eg for bootstrap); user-friendly graphical user interface.
Cons: complex matrix programming environment.
Price: €€€€-€€.
142 SAS
SAS (originally Statistical Analysis System, 1968) is an integrated suite of platform-independent software modules provided by SAS Institute (founded in 1976 by Jim Goodnight and colleagues).
The functionality of the system is very complete and built around four major tasks: data access, data management, data analysis, and data presentation.
Applications of the SAS system include: statistical analysis, data mining, and forecasting; report writing and graphics; operations research and quality improvement; applications development; and data warehousing (extract, transform, load).
Pros: very complete tool for data analysis; flexibility and programming capabilities (eg for Bayesian, bootstrap, conditional, or meta-analyses); handles large volumes of data.
Cons: complex programming environment; labyrinth of modules and interfaces; very expensive.
Price: €€€€-€€€€.
143 Statistica
STATISTICA is a statistics and analytics software package developed by StatSoft, Inc.
It provides a wide selection of data analysis, data management, data mining, and data visualization procedures. Features of the software include basic and multivariate statistical analysis, quality control modules, and a collection of data mining techniques.
Pros: extensive range of methods; user-friendly graphical interface; has been called "the king of graphics".
Cons: limited flexibility and programming capabilities; labyrinthine structure.
Price: €€€€.
144 SPSS
SPSS (originally Statistical Package for the Social Sciences) is a computer program for statistical analysis, first released in 1968 and now distributed by IBM.
SPSS is among the most widely used programs for statistical analysis in the social sciences. It is used by market researchers, health researchers, survey companies, governments, education researchers, marketing organizations, and others.
Pros: extensive range of tests and procedures; user-friendly graphical interface.
Cons: limited flexibility and programming capabilities.
Price: €€€€.
145 Stata
Stata (a name formed by blending "statistics" and "data") is a general-purpose statistical software package created in 1985 by StataCorp.
Stata's full range of capabilities includes data management, statistical analysis, graphics generation, simulations, and custom programming. Most meta-analysis tools were first developed for Stata, and thus this package offers one of the most extensive libraries of statistical tools for systematic reviewers.
Pros: flexibility and programming capabilities (eg for bootstrap or meta-analyses); sophisticated graphical capabilities.
Cons: relatively complex interface.
Price: €€-€€€.
146 WinBUGS and OpenBUGS
WinBUGS (Windows-based Bayesian inference Using Gibbs Sampling) is statistical software for the Bayesian analysis of complex statistical models using Markov chain Monte Carlo (MCMC) methods, developed by the MRC Biostatistics Unit at the University of Cambridge, UK. It is based on the BUGS (Bayesian inference Using Gibbs Sampling) project, started in 1989.
OpenBUGS is the open-source variant of WinBUGS.
Pros: flexibility and programming capabilities.
Cons: complex interface.
Price: free.
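The MCMC idea behind BUGS can be illustrated in a few lines. The sketch below uses a Metropolis random-walk sampler (a close relative of the Gibbs sampling BUGS is named after) targeting a standard normal density; the proposal scale, iteration count, and burn-in are assumed tuning choices, not BUGS defaults.

```python
# Minimal Metropolis MCMC sampler targeting an (unnormalized) N(0, 1) density.
# Proposal scale, chain length, and burn-in are illustrative assumptions.
import math
import random

random.seed(0)

def log_target(x):
    return -0.5 * x * x  # log of an unnormalized standard normal density

x, samples = 0.0, []
for _ in range(50_000):
    proposal = x + random.gauss(0, 1.0)  # symmetric random-walk proposal
    # Accept with probability min(1, target(proposal) / target(x)):
    if math.log(random.random()) < log_target(proposal) - log_target(x):
        x = proposal
    samples.append(x)

burned = samples[5_000:]  # discard burn-in
mean = sum(burned) / len(burned)
var = sum((s - mean) ** 2 for s in burned) / len(burned)
print(f"sample mean ≈ {mean:.2f}, variance ≈ {var:.2f}")
```

For a standard normal target the chain's long-run mean and variance should approach 0 and 1; BUGS applies the same principle to far more complex hierarchical models.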
148 Take-home messages
Advanced statistical methods are best seen as a set of modular tools which can be applied and tailored to the specific task of interest.
The concept of the generalized linear model highlights how most statistical methods can be considered part of a broader family of methods, depending on the specific framework or link function.
149 Many thanks for your attention!
For these slides and similar slides:
For any query: