
1. Some comments on the 3 papers
Robert T. O'Neill, Ph.D.

2. Comments on G. Anderson
- WHISH is a nice example
- Randomization (Zelen design), but using different sources of data for outcomes
- Outcome data: self-reported, adjudicated from medical records, Medicare claims (a hybrid that allows estimation of sensitivity (SE) and specificity (SP))
- Impact of outcome misclassification (see the sketch below)
- Event data not defined by protocol; you depend on the health care system
- Claims data DO NOT provide standardized data; see Mini-Sentinel and OMOP
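
Where a hybrid outcome source yields estimates of sensitivity and specificity, a standard correction can quantify the impact of misclassification on an observed event rate. Below is a minimal sketch of the Rogan-Gladen style correction; the function name and the numbers are illustrative, not taken from the WHISH analysis.

```python
# Minimal sketch: correcting an observed event rate for outcome
# misclassification when validation gives sensitivity (SE) and
# specificity (SP). Rogan-Gladen style estimator; names and numbers
# are illustrative, not from the paper under discussion.

def corrected_rate(p_observed: float, se: float, sp: float) -> float:
    """Estimate the true event rate from the observed rate, assuming
    non-differential misclassification with known SE and SP."""
    denom = se + sp - 1.0
    if denom <= 0:
        raise ValueError("SE + SP must exceed 1 for the correction to work")
    return (p_observed + sp - 1.0) / denom

# An outcome observed at 5% with SE = 0.60 and SP = 0.99 implies a
# true rate near 6.8%: under-ascertainment masks events.
print(corrected_rate(0.05, se=0.60, sp=0.99))
```

The same algebra shows why poor sensitivity and specificity push comparative estimates toward the null, a point slide 6 returns to.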

3. Comments on A J Cook
- Key component is randomization at the patient or clinic level and use of the electronic health record for data capture (cluster randomization addresses different issues; see the sketch below)
- Missing data, informative censoring, switching, measuring duration of exposure (repeat Rx, gaps): different answers depending upon the definition
- Validation of outcomes makes the pragmatic trial less simple
- Only some outcomes (endpoints), populations, and questions are addressable before complexities of interpretation overwhelm
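
One concrete sense in which cluster randomization "addresses different issues": correlation within clinics inflates the required sample size. The standard design-effect formula DEFF = 1 + (m - 1) * ICC makes this visible; the numbers below are assumed purely for illustration.

```python
# Sketch: how clinic-level (cluster) randomization inflates the
# required sample size relative to patient-level randomization.
# DEFF = 1 + (m - 1) * ICC is the standard design effect; the
# numbers below are illustrative assumptions.

def design_effect(cluster_size: float, icc: float) -> float:
    """Variance inflation from randomizing clusters of average
    size `cluster_size` with intracluster correlation `icc`."""
    return 1.0 + (cluster_size - 1.0) * icc

n_individual = 1000   # patients needed under patient-level randomization
m, icc = 50, 0.02     # assume 50 patients per clinic, ICC of 0.02
n_cluster = n_individual * design_effect(m, icc)
print(n_cluster)      # 1980 patients, i.e., nearly double
```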

4. Comments on M Gaffney
- PRECISION and EAGLE are not large simple trials; they are large difficult trials
- Outcome adjudication, monitoring strategies
- Non-inferiority poses significant challenges for pragmatic trials: generally no assay sensitivity
- Margin selection based upon evidence vs. based upon "close enough", but it is not clear whether both are equally good or bad

5. Other comments on NI studies
- Pre-specifying the margin: why, and what is the difference in these two situations? (A minimal decision rule is sketched below.)
- What treatment difference is detectable and credible, given the trade-off between bias and huge sample size?
- When pre-specification is not possible because there is no historical information, the width of the confidence interval makes sense, but it admits two conclusions: both treatments the same and comparably effective vs. both the same but both ineffective
- Which endpoints are eligible: hard endpoints (yes), patient symptoms (no)
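
To make the margin discussion concrete, here is a minimal sketch of the standard NI decision rule for a hazard ratio: non-inferiority is declared when the upper confidence bound falls below the pre-specified margin. The margin and estimates are assumptions for illustration, not values from PRECISION or EAGLE.

```python
# Sketch of the basic non-inferiority decision rule for a hazard
# ratio with a pre-specified margin. Margin and estimates are
# illustrative assumptions.
import math

def noninferior(log_hr: float, se_log_hr: float, margin: float) -> bool:
    """Declare non-inferiority if the upper bound of the two-sided
    95% CI for the hazard ratio falls below the margin."""
    z = 1.96  # ~97.5th percentile of the standard normal
    upper = math.exp(log_hr + z * se_log_hr)
    return upper < margin

# Example: estimated HR = 1.05, SE(log HR) = 0.10, margin = 1.30
print(noninferior(math.log(1.05), 0.10, margin=1.30))  # True
```

Note that the rule says nothing about assay sensitivity: a trial too blunt to detect any difference passes it just as easily, which is the core worry for pragmatic NI designs.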

6. Other comments
- Are NI designs appropriate for claims data or EHR without independent all-case adjudication? Poor sensitivity and specificity drive the estimate toward the null, so what does a null result mean?
- Experience suggests that exposure (drugs) has better accuracy than diagnoses or procedures (outcomes) in claims databases
- Duration of exposure depends upon algorithms for repeat prescriptions; different results depending upon definitions of gaps between repeated Rx (see the sketch below)
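
The exposure-duration point is easy to demonstrate: the same dispensing history yields different exposure episodes depending on the allowed gap between fills. A hypothetical sketch follows; the dates, days supplied, and the `max_gap_days` parameter are all assumptions.

```python
# Sketch: building drug-exposure episodes from pharmacy dispensings,
# starting a new episode whenever the gap between the end of one fill
# and the start of the next exceeds `max_gap_days`. Changing that one
# parameter changes the measured exposure duration.
from datetime import date, timedelta

def exposure_episodes(fills, max_gap_days):
    """fills: list of (start_date, days_supply), sorted by start_date.
    Returns a list of (episode_start, episode_end) periods."""
    episodes = []
    for start, days_supply in fills:
        end = start + timedelta(days=days_supply)
        if episodes and (start - episodes[-1][1]).days <= max_gap_days:
            # close enough to the prior fill: extend the current episode
            episodes[-1] = (episodes[-1][0], max(episodes[-1][1], end))
        else:
            episodes.append((start, end))
    return episodes

fills = [(date(2024, 1, 1), 30), (date(2024, 2, 10), 30), (date(2024, 4, 1), 30)]
print(len(exposure_episodes(fills, max_gap_days=14)))  # 2 episodes
print(len(exposure_episodes(fills, max_gap_days=30)))  # 1 episode
```

Reporting results under several gap definitions is one way to show how sensitive the answer is to this algorithmic choice.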

7. Can randomization overcome lack of blinding and personal choices after randomization?
- Use of observational methods for accounting for unmeasured confounding of assigned treatment, with time-to-event outcomes subject to censoring
- Directed Acyclic Graphs to explore the confounding-censoring problem; diagnostics
- Instrumental variables (see the sketch below)
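
As a sketch of the instrumental-variable idea: randomized assignment can serve as an instrument for the treatment actually received, recovering the treatment effect despite an unmeasured confounder that drives post-randomization choices. The simulation below is illustrative only and ignores censoring, which a real time-to-event analysis would have to address.

```python
# Sketch: instrumental-variable (Wald / 2SLS) estimation using
# randomized assignment as the instrument for treatment received.
# Simulated data; all parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
z = rng.integers(0, 2, n)                    # randomized assignment (instrument)
u = rng.normal(size=n)                       # unmeasured confounder
# treatment received depends on assignment AND on the confounder
d = ((0.8 * z + 0.5 * u + rng.normal(size=n)) > 0.5).astype(float)
y = 1.0 * d + 1.0 * u + rng.normal(size=n)   # true treatment effect = 1.0

# Wald estimator with a binary instrument: cov(y, z) / cov(d, z)
iv_est = np.cov(y, z)[0, 1] / np.cov(d, z)[0, 1]
naive = y[d == 1].mean() - y[d == 0].mean()  # confounded as-treated comparison
print(f"naive: {naive:.2f}, IV: {iv_est:.2f}")  # naive is biased upward
```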

8. Lessons learned from Mini-Sentinel and the Observational Medical Outcomes Partnership (OMOP)
- Distributed data models
- Common data models
- Limits of detectability of effect sizes of two or more competing agents: calibration and interpretation of p-values for non-randomized studies (see the sketch below)
- Not all outcomes and exposures can be dealt with in a similar manner
- Know the limitations of your database. Is this possible in advance of conducting the study? It should be part of the intensive study planning, the protocol, and the prospective analysis plan
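
One concrete form of the calibration point: the OMOP work calibrates p-values against an empirical null estimated from negative controls (exposure-outcome pairs where the true relative risk is 1). Below is a heavily simplified sketch of that idea; the real method, implemented in the OHDSI EmpiricalCalibration R package, fits the null by maximum likelihood accounting for each control's standard error, and all numbers here are made up.

```python
# Simplified sketch of empirical p-value calibration: estimate an
# empirical null from negative-control effect estimates, then judge
# a new estimate against that null instead of N(0, SE^2).
import numpy as np
from scipy.stats import norm

# log-RR estimates from negative controls (should center at 0, often don't)
negative_control_log_rr = np.array([0.15, 0.30, -0.05, 0.22,
                                    0.10, 0.28, 0.05, 0.18])
mu = negative_control_log_rr.mean()
sigma = negative_control_log_rr.std(ddof=1)

log_rr_new, se_new = 0.35, 0.10   # the study estimate under scrutiny

# traditional p-value assumes the null is centered at zero
p_traditional = 2 * norm.sf(abs(log_rr_new) / se_new)
# calibrated p-value folds systematic error into the null
p_calibrated = 2 * norm.sf(abs(log_rr_new - mu) / np.hypot(se_new, sigma))
print(f"traditional p = {p_traditional:.4f}, calibrated p = {p_calibrated:.4f}")
```

With these made-up numbers the traditional p-value looks highly significant while the calibrated one does not, which is exactly the interpretive caution the slide raises for non-randomized studies.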

9. An example of Medicare data use, but not an RCT

10. Some other views and opinions on CER using the learning health care system

11. Lessons learned from OMOP and Mini-Sentinel about observational studies using health care claims data or EHR, but without randomization
- Lessons about the limitations of the databases, outcome capture, ascertainment, and missing data (confounders) are relevant to RCT use of the same data sources
- Lessons about data models, and challenges for data (outcome standardization)
http://www.mini-sentinel.org/
http://omop.org/

12. The Observational Medical Outcomes Partnership: many findings

13. Some ideas on what to evaluate about a given data source before committing to conducting a study; the focus is on observational studies, but this is also relevant to pragmatic RCTs

14. How do these presentations relate to pragmatic trials within a health care system?
- Two or more competing therapies on a formulary, never compared with each other
- Randomize patients under the equipoise principle: do you need patient consent, or physician consent, in a health plan when there are no data and the honest position is "I or we don't know but want to find out"?
- Collect electronic medical record data, including exposures and outcomes, and decide if any additional adjudication is needed
- Analyze according to best practices, but with some prospective SAPs (statistical analysis plans): causal inference strategies?

