Development of PRO Content


Development of PRO Content
Steven Blum, Director, Patient Reported Outcomes
Duke Industry Statistics Symposium, September 7, 2017

Disclaimer
Steven Blum is an employee and shareholder of GlaxoSmithKline (GSK). The views and opinions expressed in the following slides are those of the presenter and should not be attributed to GSK or any other organization.

Patient-Reported Outcomes
A Patient-Reported Outcome (PRO) measures the concepts that are relevant and important to a patient's condition and its treatment, directly from the patient and without interpretation by a clinician.
PROs are one type of Clinical Outcome Assessment (COA). Other types of COAs include Clinician-Reported Outcomes (ClinROs), Observer-Reported Outcomes (ObsROs), and Performance Outcome (PerfO) measures.
The principles of the PRO Guidance apply to both PROs and other COAs.

Selecting Type of COA Measure
- Observable* concepts (e.g., signs, events, behaviors):
  - Clinical judgment needed: ClinRO
  - No clinical judgment needed: PRO if self-report is feasible and appropriate; otherwise, ObsRO
- Unobservable concepts (e.g., feelings, sensations): PRO
* Observable concepts must be able to be detected by a sense or senses (vision, hearing, smell, or touch).
Source: Critical Path Institute - PRO Consortium

Patient-Reported Outcomes
In clinical trials, PROs provide evidence of treatment benefit with respect to symptoms, impacts, and functioning known only to the patient. PROs can also be used to capture the patient's perspective on observable signs and functional impact.
In addition to clinical trials, PROs can be used in real-world studies (e.g., surveys, registries, observational studies) as well as in clinical practice and in quality measures.
The content of a PRO is developed through qualitative research that elicits key concepts of disease experience and impact from patients representing the target population. The US Food and Drug Administration and the European Medicines Agency emphasize the need to establish content validity as the initial step in developing a PRO.

Evaluation of PROs by Regulatory Authorities
The evaluation of a PRO instrument to support claims in medical product labeling includes the following considerations:
- The population enrolled in the clinical trial
- The clinical trial objectives and design
- The PRO instrument's conceptual framework
- The PRO instrument's measurement properties: reliability, content validity, construct validity, and ability to detect change
- Context of use
Source: FDA PRO Guidance (2009)

Content Validity
Content validity is the extent to which the instrument measures the concept of interest. It is supported by evidence from qualitative studies that the items and domains of an instrument are appropriate and comprehensive relative to its intended measurement concept, population, and use. Content validity is specific to the population, condition, and treatment to be studied (the context of use).
For PRO instruments, the items, domains, and general scores should reflect what is important to patients and be comprehensive with respect to patient concerns relevant to the concept being assessed. Documentation of patient input in item generation, as well as evaluation of patient understanding through cognitive interviewing, can contribute to evidence of content validity.
Evidence of other types of validity (e.g., construct validity) or reliability (e.g., consistent scores) will not overcome problems with content validity, because the instrument is evaluated on its adequacy to measure the concept represented by the labeling claim. It is important to establish content validity before other measurement properties are evaluated.
Guidance for Industry - Patient-Reported Outcome Measures: Use in Medical Product Development to Support Labeling Claims (2009): https://www.fda.gov/downloads/Drugs/GuidanceComplianceRegulatoryInformation/Guidances/UCM193282.pdf

"Traditional" Development Process
Preliminary Research → Concept Elicitation → Item Generation → Cognitive Debriefing → Psychometric Testing

FDA Roadmap to Patient-Focused Outcome Measurement https://www.fda.gov/Drugs/DevelopmentApprovalProcess/DrugDevelopmentToolsQualificationProgram/ucm284077.htm

Preliminary Research
Before "development" of a new PRO, researchers should conduct preliminary research, including:
- A literature review of prior qualitative studies, to identify relevant signs, symptoms, and impacts associated with the disease; this informs the development of the protocol and interview guides
- A literature review of existing PRO/COA instruments, to identify potential measures, concepts measured, and potential items, and a "gap analysis" to assess the development history and adequacy of existing measures
- Input from experts (clinicians, patients, caregivers, methodologists)
- Preliminary qualitative focus groups or interviews to develop disease understanding
- Identification, development, or revision of a conceptual disease model
Together, these preliminary steps build understanding of the disease or condition and inform the design of qualitative studies, including development of the protocol, interview guide, and related study documents.

Qualitative Interviews

Concept Elicitation
- Used to identify relevant concepts (meaningful health aspects) and to understand the specific language used to describe each concept
- Can be conducted as focus groups or interviews
- Typically uses a semi-structured interview guide to allow for both spontaneous and probed responses
- Recorded for transcription and analysis
- Interviews are analyzed using qualitative data analysis software: development of a coding framework, code book, and summaries; assessment of concept saturation
- Forms the basis for item generation

Cognitive Debriefing
- Used to assess the relevance of the items/concepts selected and comprehension and understanding of the item stem, response scale, and recall period
- Used to revise/refine the measure; typically uses the "think aloud" method
- Problematic items can be revised or dropped
- All changes (revisions, item reduction, additions) are documented in detail using an Item Tracking Matrix: original item, examples of issues from interviews, revised item, rationale
- Cognitive interviews can also be used when assessing equivalence between different modes of administration (P2E migration) or in translation and cultural adaptation

Analyzing Qualitative Data
Caution: sample sizes are often small!
- Concept elicitation: often ~25-30 subjects (stop when saturation is reached)
- Cognitive interviews: often ~15 subjects (e.g., 3 rounds of 5 interviews)
- Larger samples may be needed when there is substantial heterogeneity; samples in rare diseases and difficult-to-reach populations can be even smaller
- Recruitment focuses on diversity (e.g., age, gender, education level, disease status)
- Inclusion/exclusion criteria tend to be less restrictive than in clinical trials; it may be desirable to recruit a broader population than will be enrolled in clinical trials
- Recruitment goals may be incorporated, but strict quotas often are not; demographics and clinical characteristics may not match prior clinical trials or real-world studies
The analysis of qualitative data is not a quantitative process; there are no significance levels, effect sizes, or other quantitative metrics. Qualitative analyses use respondents' words and phrases as data, analyzing and classifying these data by concept and subconcept.

Analyzing Qualitative Data
Qualitative interviews are deep and rich with data, but the data are unstructured.
- Code and analyze the data using qualitative data analysis software; try not to "over-quantify" the data
- Look at frequency of mentions, both by transcript and by total mentions; be mindful of probing/follow-up (spontaneous vs. probed reporting)
- Rating exercises (e.g., severity, bother) can also be included
- Assess attributes such as frequency, severity, duration, and variability to help with framing of questions
- Analyze the coding process via inter-rater agreement (>90%)
- Assess saturation (overall and in subgroups)
- Identify preferred patient language/terms for item generation
This forms the basis for the item generation process, or the justification for use or modification of existing measures.
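The inter-rater agreement check mentioned above can be computed directly from two coders' code assignments for the same transcript passages. A minimal sketch with hypothetical codes and coder data (the code labels and functions below are illustrative, not from any specific software package); Cohen's kappa is included as a common chance-corrected alternative to raw percent agreement:

```python
from collections import Counter

def percent_agreement(codes_a, codes_b):
    """Share of passages to which both coders assigned the same code."""
    assert len(codes_a) == len(codes_b)
    matches = sum(a == b for a, b in zip(codes_a, codes_b))
    return matches / len(codes_a)

def cohens_kappa(codes_a, codes_b):
    """Cohen's kappa: agreement corrected for chance agreement."""
    n = len(codes_a)
    po = percent_agreement(codes_a, codes_b)            # observed agreement
    counts_a, counts_b = Counter(codes_a), Counter(codes_b)
    # Expected chance agreement from each coder's marginal code frequencies
    pe = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / (n * n)
    return (po - pe) / (1 - pe)

# Hypothetical codes assigned by two analysts to the same 10 passages
rater1 = ["pain", "fatigue", "pain", "sleep", "pain",
          "fatigue", "sleep", "pain", "fatigue", "pain"]
rater2 = ["pain", "fatigue", "pain", "sleep", "fatigue",
          "fatigue", "sleep", "pain", "fatigue", "pain"]

print(percent_agreement(rater1, rater2))  # 0.9 -- below the >90% target
print(round(cohens_kappa(rater1, rater2), 3))
```

In practice this calculation is usually done inside the qualitative data analysis software; disagreements flagged this way are typically resolved by consensus before coding continues.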

Coding Process The primary goal of transcript coding is to organize and catalog a patient’s descriptions of their experiences within the context of use. Coding is an iterative process with opportunities for data to be re-examined and reanalyzed until no new codes or code groupings are identified and all passages from the transcripts have been assigned one or more codes. Patrick et al. Value in Health (14):967-977, figure adapted by Mona Martin

Documenting Saturation Saturation: the point when no new relevant or important information emerges and collecting additional data will not add to the understanding of how patients perceive the concept of interest and the items in a questionnaire. FDA PRO Guidance (2009); Patrick et al. Value in Health (14):967-977
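Saturation is commonly documented with a table showing, for successive waves of interviews, which concepts were heard for the first time; when a wave contributes no new concepts, saturation is suggested. A minimal sketch using hypothetical wave data (the concept labels and the `saturation_table` helper are illustrative):

```python
def saturation_table(waves):
    """For each interview wave, list concepts elicited for the first time.

    `waves` is a list of (label, concepts) pairs in interview order.
    Saturation is suggested when a wave yields no new concepts.
    """
    seen = set()
    rows = []
    for label, concepts in waves:
        new = sorted(set(concepts) - seen)  # first-time mentions this wave
        seen |= set(concepts)
        rows.append((label, new))
    return rows

# Hypothetical concepts elicited across three waves of five interviews each
waves = [
    ("Wave 1", ["pain", "stiffness", "fatigue", "sleep problems"]),
    ("Wave 2", ["pain", "fatigue", "walking difficulty"]),
    ("Wave 3", ["pain", "stiffness", "sleep problems"]),
]

for label, new in saturation_table(waves):
    print(f"{label}: {len(new)} new concept(s) {new}")
```

Here Wave 3 adds nothing new, which would be reported as evidence of saturation; the same tabulation can be repeated within subgroups (e.g., by disease severity) as the slide on analysis suggests.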

Mixed Methods Development
Concept Elicitation (Qual) → Item Generation → Cognitive Debriefing (Qual) → Pilot Survey 1 (Quant) → Revision → Pilot Survey 2 (Quant) → Formal Testing (Quant)
Combines both qualitative and quantitative data in the development of the measure, often in an iterative process. Example:
- Conduct traditional qualitative development
- Conduct an initial pilot quantitative study to test the measure (can use CTT, IRT, RMT)
- Identify potential issues (scaling issues, fit, redundancy, measurement gaps)
- Make additional revisions to the measure and evaluate the changes in additional cognitive interviews
- Conduct a second-wave pilot quantitative study to test the revisions
- After revisions are complete, move to formal psychometric testing

Example: Pain Assessment for Lower Back Symptoms
[Figures: original 0-10 NRS response scale revised to a 4-level categorical VRS; item threshold map; person-item map]
Blum et al., ISOQOL, 2016

Conceptual Framework
The conceptual framework explicitly defines the concepts measured by the instrument in a diagram that describes the relationships among the items, domains (subconcepts), and concepts measured and the scores produced by the PRO instrument.
The conceptual framework of a PRO instrument will evolve and be confirmed over the course of instrument development as the sponsor gathers empirical evidence to support item groupings and scores (e.g., psychometric evaluation, factor analysis, finalization of the scoring algorithm).
FDA PRO Guidance (2009); Patrick et al. Value in Health (14):967-977

Summary
- PROs can be used to capture the patient's perspective on the signs, symptoms, and impacts associated with a disease or condition; PRO measures can be used to assess treatment effect in clinical trials
- In addition to clinical trials, PROs can be used in real-world studies (e.g., surveys, registries, observational studies) as well as in clinical practice and in quality measures
- Qualitative interviews ensure the selection of relevant concepts and patient understanding; the appropriate context of use must be ensured
- It is important to establish content validity before other measurement properties are evaluated
- Qualitative (unstructured) data are analyzed using different methods from structured data (including PRO data collected in studies); there are well-established methods for analyzing qualitative data
- Mixed methods approaches can combine both qualitative and quantitative data as part of instrument development

Questions? steven.i.blum@gsk.com