
1 What Can Different Types of Evaluation Add to Individual-Focused Behavior Change Interventions? Margaret Handley, PhD, MPH. Center for Vulnerable Populations; Department of Epidemiology & Biostatistics and Department of Medicine

2 Homework?

3 FINAL PROJECT FOR EPI 246, DUE June 3, 2010, 5pm
For the final project for the course, please prepare a 3-5 page document and an abstract (250-350 words) that draws on the materials and lectures from the course. You may choose to develop one of the following in an area that you are currently involved with or anticipate being involved with in the future:
1. Text for a grant proposal for a behavior change intervention, or to describe behavior, using theory-informed approaches.
2. Text for a program you would develop, including its description, setting, and rationale, again linked to materials in class related to theory, tools, or programs.
3. The rationale and sample materials for a training you would develop that would lead to behavior change, and how these programs would link to theory.
Please be creative, but do link the work to the theories, frameworks, and models that have been presented in class, and include at least 3-4 citations from the literature. Also include at least one figure/diagram that relates to the course.

4 Outline
1. Evaluation Approaches and IDS Relevance
2. DIME and RE-AIM Frameworks
3. Examples:
- Tai Chi Intervention (RE-AIM)
- Fidelity vs. Flexibility in Practice-Based Research
- CBPR and Logic Model for Cancer Screening

5 Program Evaluation Can Help To…
- Measure an intervention's effectiveness on targeted process or outcome measures
- Determine the most efficient and effective strategy for implementing the intervention
- Verify the mechanisms through which you believe your intervention is working
- Guide/support replication in other settings
- Measure fidelity and adaptation
- Align goals with system or stakeholder goals
- Determine cost-effectiveness and priority

6 How To Conceptualize Evaluation
Types of evaluation:
- Outcome evaluation
- Process evaluation
- Resource evaluation
Relevant perspectives:
- Clinic/Organization
- Public Health
- Policy

7 How do we get what we really want?
"Program evaluation is the systematic collection of data related to a program's activities and outcomes so that decisions can be made to improve efficiency, effectiveness or adequacy."
- CDC, Practical Evaluation of Public Health Programs
http://www.cdc.gov/eval/evalcbph.pdf

8 Goals of Evaluation
Intuitive goal -> Evaluation concept
- Am I making a difference? What have we done? How well have we done it? -> Describe/summarize
- What is the value of it? What had the biggest impact? What could we get rid of? -> Quality; importance; accountability; cost-effectiveness
- How effective have we been, and for whom? -> Effectiveness; social equity
- What are we going to do now that we have this info? -> Evidence-based planning; outcomes-focusing

9 A Range of Evaluation Needs in IDS Research
We know research implementation is highly dependent on local context and involves inter-related interactions across multiple groups, yet most evaluations focus on measures of individual behavior change... which does not give one much to go on for successful replication, or for identifying key pitfalls.

10 Framework: DIME and Translating Research into Practice (TRiP-LaB)
Hanbury et al., Implementation Science 2010
Designed for a UK National Health Service program on TRIP and implementation: collaborations for applied research (university and NHS partnerships)
1. Selecting the innovation = Develop
2. Implement in local settings
3. Evaluate
...then conduct a large RCT, etc.

11 Develop, Implement and Evaluate (DIME), or: Build It with the Evaluation in Mind
Hanbury et al., Implementation Science 2010
Three phases:
1. Selecting the innovation = Develop
2. Implement in local settings
3. Evaluate

12 Develop, Implement and Evaluate (DIME)
Develop:
- Stakeholder consultation to identify innovations
- Conjoint analysis survey
- Mapping against theory-based characteristics
- Explore team/social network culture
- Synthesize/rank
Implement:
- Review acceptability to stakeholders in their local context
- Review 'policy' cost-effectiveness of different strategies
- Deliver to relevant groups
Evaluate:
- Pre-post change in outcomes
- Interrupted time series
- Cost-effectiveness of implementation
Hanbury et al., Implementation Science 2010

13 Develop, Implement and Evaluate (DIME)
Hanbury et al., Implementation Science 2010
Developing/Selecting the Innovation Phase
- Stakeholder consultation to identify innovations to target (e.g., qualitative interviews/focus groups; maternal mental health prioritization within MCH)
- Conjoint analysis survey of stakeholders (e.g., trade-offs among mixes of attributes for scenarios, such as likely cost per patient, local expertise to implement, and data resources; preferences get ranked by stakeholders)
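
To make the conjoint analysis step concrete: stakeholders rate scenarios that bundle attribute levels, part-worth utilities are estimated from those ratings, and scenarios are ranked by predicted utility. The sketch below is a minimal Python illustration of that idea, not the instrument Hanbury et al. used; the attributes, scenarios, and ratings are all invented.

```python
# Minimal sketch of scoring a conjoint analysis survey (hypothetical data).
import numpy as np

# Columns (assumed attributes): low cost per patient, local expertise
# available, data resources in place. 1 = attribute present in the scenario.
scenarios = np.array([
    [1, 1, 0],
    [1, 0, 1],
    [0, 1, 1],
    [0, 0, 0],
    [1, 1, 1],
    [0, 1, 0],
])
ratings = np.array([7.2, 6.1, 5.8, 2.0, 8.5, 4.3])  # invented mean ratings (1-9)

# Intercept plus one column per attribute; least squares recovers the
# part-worth utility each attribute contributes to a scenario's rating.
X = np.column_stack([np.ones(len(scenarios)), scenarios])
partworths, *_ = np.linalg.lstsq(X, ratings, rcond=None)

for name, w in zip(["low cost", "local expertise", "data resources"],
                   partworths[1:]):
    print(f"part-worth of {name}: {w:+.2f}")

# The trade-off ranking the slide describes: order scenarios by utility.
predicted = X @ partworths
for i in np.argsort(-predicted):
    print(f"scenario {i}: predicted utility {predicted[i]:.2f}")
```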

14 DIME cont.
Hanbury et al., Implementation Science 2010
- Mapping against theory-based characteristics, resulting in scoring of the different possibilities (e.g., strength of evidence for innovations, such as self-efficacy)
- 'Diagnostic analysis' with semi-quantitative surveys to see whether the proposed innovation looks good with regard to local social networks, teams, and communication channels
- Synthesize to choose the innovation: one that stakeholders have prioritized, that ranks high in the conjoint survey, and that maps to the evidence and practical considerations
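
The synthesis step amounts to a weighted prioritization across the three streams of evidence. A minimal sketch, assuming invented innovations, scores, and weights (the paper does not prescribe a formula):

```python
# Hypothetical synthesis scoring: combine the conjoint ranking, the
# theory/evidence mapping, and the diagnostic (local feasibility) analysis
# into one prioritization. All names, scores, and weights are invented.
candidates = {
    # innovation: (conjoint_score, evidence_score, feasibility_score), each 0-1
    "screening prompt":       (0.9, 0.8, 0.7),
    "self-efficacy training": (0.6, 0.9, 0.5),
    "audit and feedback":     (0.7, 0.6, 0.9),
}
weights = (0.4, 0.4, 0.2)  # assumed relative importance of the three criteria

def synthesis_score(scores):
    return sum(s * w for s, w in zip(scores, weights))

# Highest score = the innovation carried forward to the Implement phase.
for name, scores in sorted(candidates.items(),
                           key=lambda kv: synthesis_score(kv[1]), reverse=True):
    print(f"{name}: {synthesis_score(scores):.2f}")
```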

15 DIME cont.
Implementation Phase
- Piloting different strategies
- Review of 'policy' cost-effectiveness of different strategies: costs and practical factors for each option
- Detailing of components for fidelity and uptake (e.g., details of numbers and types of sessions)
- Deliver to relevant groups

16 DIME cont.
Evaluation Phase
- Interrupted time series (e.g., snapshots to examine the impact on processes of care and outcomes)
- Pre-post change in outcomes (surveys of individual behavior-change measures linked to theory-based constructs, team characteristics, and qualitative doer/non-doer analyses, which open up the 'black box' of the evaluation)
- Cost-effectiveness of implementation (micro-costs and the extent of behavior change achieved, to arrive at implementation cost-effectiveness)
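
As one concrete reading of these bullets, an interrupted time series is typically analyzed with a segmented regression: a baseline trend, a level change at the intervention, and a trend change afterwards. The sketch below simulates monthly process-of-care data and fits that model, then computes a crude cost-per-unit-of-change figure; all numbers, including the cost, are invented.

```python
# Segmented regression for an interrupted time series (simulated data).
import numpy as np

rng = np.random.default_rng(0)
months = np.arange(24)
intervention_month = 12
post = (months >= intervention_month).astype(float)                  # level-change term
time_since = np.where(post == 1, months - intervention_month, 0.0)   # trend-change term

# Simulate: flat-ish baseline, a 6-point level jump, a small post-trend.
y = 50 + 0.1 * months + 6 * post + 0.5 * time_since + rng.normal(0, 1.5, 24)

X = np.column_stack([np.ones(24), months, post, time_since])
b0, trend, level_change, trend_change = np.linalg.lstsq(X, y, rcond=None)[0]
print(f"baseline trend: {trend:.2f}/month")
print(f"level change at intervention: {level_change:.2f}")
print(f"trend change after intervention: {trend_change:.2f}/month")

# Crude pre-post comparison and implementation cost-effectiveness,
# echoing the micro-costing bullet (cost figure is a placeholder).
pre_mean, post_mean = y[:12].mean(), y[12:].mean()
implementation_cost = 12_000
print(f"cost per unit of change: {implementation_cost / (post_mean - pre_mean):.0f}")
```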

17 RE-AIM TO HELP PLAN, EVALUATE, AND REPORT STUDIES
R: Increase Reach
E: Increase Effectiveness
A: Increase Adoption
I: Increase Implementation
M: Increase Maintenance
Glasgow, et al. Ann Behav Med 2004;27(1):3-12

18 PURPOSES OF RE-AIM
- To broaden the criteria used to evaluate programs to include external validity
- To evaluate issues relevant to program adoption, implementation, and sustainability
- To help close the gap between research studies and practice by:
  - Informing the design of interventions
  - Providing guides for adopters
  - Suggesting standard reporting criteria

19 RE-AIM Evaluation Qs
Reach: What percent of potentially eligible participants a) were excluded, b) took part, and c) how representative were they?
Efficacy or Effectiveness: What impact did the program have on a) all participants who began the program; b) process, intermediate, and primary outcomes; and c) both positive and negative (unintended) outcomes, including quality of life?
Adoption: What percent of settings and intervention agents within these settings (e.g., schools/educators, medical offices/physicians) a) were excluded, b) participated, and c) how representative were they?

20 RE-AIM Evaluation Qs (cont.)
Implementation: Were the intervention components delivered as intended by the participating settings and intervention agents (e.g., schools/educators, medical offices/physicians)?
Maintenance: a) What were the long-term effects? What was the attrition rate, and were drop-outs representative? Were different intervention components continued? b) How was the original program modified?

21 Example: Tai Chi Intervention in a Community-Based Falls Prevention Program
Reach: Those who qualified for the program, divided by those who responded to the promotional materials. Representativeness: demographics of those in the program compared to those coming to the center, using administrative data.
Effectiveness: Change in functional status measures; quality-of-life measure (SF-12).
Adoption: Proportion of centers approached that agreed to participate.
Implementation: Did the trainers follow key elements of the protocol; adherence to the plan; frequency of sessions; individuals doing the program at home; attendance level sustained.
Maintenance: Plan to continue/actual continuation post-trial.
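
The arithmetic behind these measures is simple ratios. A hypothetical sketch with invented counts (the slide does not give the trial's actual numbers):

```python
# RE-AIM ratios for the falls-prevention example; every count is invented.
responded_to_promotion = 180  # responded to the promotional materials
qualified = 120               # met the program's eligibility criteria
centers_approached, centers_participating = 20, 14
sessions_planned, sessions_per_protocol = 24, 21
centers_continuing_post_trial = 9

reach = qualified / responded_to_promotion           # as defined on the slide
adoption = centers_participating / centers_approached
implementation_fidelity = sessions_per_protocol / sessions_planned
maintenance = centers_continuing_post_trial / centers_participating

print(f"Reach {reach:.0%} | Adoption {adoption:.0%} | "
      f"Implementation fidelity {implementation_fidelity:.0%} | "
      f"Maintenance {maintenance:.0%}")
```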

22 Example: P4H Evaluation
Reach: How did accommodating patients' circumstances change the reach?
Adoption: How did the research team's working relationships impact uptake?
Implementation: How were essential intervention components maintained, and how were protocols changed during implementation? How did accommodating personnel costs affect implementation processes?

23 Example: CBPR Approach to Evaluating a Program to Decrease Cancer Disparities in the Southern US
Problem: cancer disparities between African Americans and whites
Goal: Improve early cancer detection and preventive behaviors
Evaluation methods: Use a logic model and a CBPR process to develop, implement, and evaluate interventions that "capture the spirit of change" while maintaining measurable outcomes
Outcomes at multiple levels:
- Process evaluation: inputs and planning strategies
- Impact evaluation: immediate effects assessed
- Outcome evaluation: medium- and long-term outcomes assessed
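
One way to keep the three evaluation levels anchored to the logic model is to lay the model out as data and map each level onto its components. A minimal sketch; the entries are illustrative assumptions, not the program's actual logic model:

```python
# Illustrative logic model with the slide's three evaluation levels mapped
# onto it. All component entries are invented.
logic_model = {
    "inputs":     ["community partners", "CBPR planning committee", "funding"],
    "activities": ["lay health advisor training", "church-based screening events"],
    "outputs":    ["sessions delivered", "people counseled"],
    "impacts":    ["screening knowledge", "intent to screen"],  # immediate effects
    "outcomes":   ["screening rates", "stage at diagnosis"],    # medium/long term
}

evaluation_plan = {
    "process evaluation": ("inputs", "activities", "outputs"),
    "impact evaluation":  ("impacts",),
    "outcome evaluation": ("outcomes",),
}

for level, components in evaluation_plan.items():
    measures = [m for c in components for m in logic_model[c]]
    print(f"{level}: {measures}")
```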

24 Example: CBPR Approach to Evaluating a Program to Decrease Cancer Disparities in Southern US

