Program Evaluation in Health Care


1 Program Evaluation in Health Care
Vicky Stergiopoulos, MSc, MD, MHSc; Onil Bhattacharyya, MD, PhD

2 Purpose of Evaluation
The purposes of evaluation include accountability, informing decisions on resource allocation, and program and policy improvement.
Assessing relevance and performance.
Rigorous evaluation can build a generalizable knowledge base of what works.
Treasury Board of Canada Secretariat. Policy on Evaluation. 2009

3 Types of Evaluation
Evaluation of need
Developmental evaluation
Evaluation of implementation and process
Evaluation of outcomes / effectiveness

4 The Realist Approach to Evaluation
“What works for whom and under what circumstances.”
“Realists do not conceive that programs work, rather it is the action of stakeholders that makes them work…the causal potential of an initiative takes the form of providing reasons and resources to enable program participants to change.”
Implications for involving stakeholders, including program recipients, program staff and funders, in program planning and evaluation.
Pawson & Tilley, 1997

5 CATCH-ED: A Brief Case Management Intervention for Frequent ED Users
An adaptation of the Critical Time Intervention model in the Canadian context
Sponsored by the Toronto Central LHIN and the Toronto Mental Health and Addictions Acute Care Alliance
A multi-organizational intervention spanning:
- 6 general and 1 specialty hospitals
- 4 community mental health and addiction agencies
- 4 community health centres
- 1 community agency providing peer support
Evaluated at the Centre for Research on Inner City Health, St. Michael’s Hospital

6 CATCH-ED Integrated Care Framework
1. Proactive identification (in the hospital – ED, inpatient – and in the community)
2. At-right-time contact and connection
3. Navigation and connection to services
4. Tailoring of care pathways and integration of care
5. Supporting structures and mechanisms (partnerships and protocols; “when all else fails” processes; ongoing monitoring and evaluation; consistent, continuous communication)
Framework elements shown in the diagram include: alternatives to the ED when in crisis; primary and psychiatric care; MHA counseling; low-barrier individual/group-based services; other determinants of health; low-barrier access transitional case management; specialized, mobile, responsive peer supports; social support; advocacy; transition to longer-term supports as needed; coordination of care delivery; multidisciplinary care team; integrated care plan; tailoring of care pathways; and low-barrier re-entry.

7 CATCH-ED Phases
Phase 1: Engagement and Goal-Setting (mo. 1-2)
- Meet in hospital whenever possible
- First contact within 24 hrs; first meeting within 48 hrs
- Rapport-building, engagement
- Rapid assessment of pressing needs, strengths, resources, and reasons for ED use
- Practical needs assistance
- Individualized, focused treatment/support plans
- 2-3 contacts per week
Phase 2: Bridging to Community (mo. 2-4)
- Assertive outreach
- Continued practical needs assistance
- Referrals to services
- Continued focus on only the most critical areas
- Strong emphasis on building and testing connections to longer-term supports
- Reduction in service intensity to 1-2 contacts per week
Phase 3: Transfer of Care (mo. 4-6)
- Transfer of care to new support network
- Focus on assessment of the strength and functioning of the support system
- Reduction in service intensity to <1 contact per week
- Once confident in the hand-off, patients are discharged, with an open-door return policy

8 Steps in Evaluation – Step 1
Understanding the program and its components:
- What are the anticipated outcomes?
- What is the underlying program theory? Is it supported by an evidence base?
- What is the program's anticipated timeline of impact?

9 Steps in Evaluation – Step 2
Evaluation design: How does the complexity of the intervention affect the evaluation design?
Implementation of the evaluation: data collection and analysis.
Interpretation of findings, reporting, and communicating to stakeholders.

10 Evaluation Design
Study design should follow study purpose or function:
- Developmental evaluation
- Implementation and process evaluation
- Outcome evaluation
Research-oriented vs. internal program-oriented evaluation

11 “Not all forms of evaluation are helpful. Indeed many forms of evaluation are the enemy of social innovation.” Patton, 2008

12 Different Contexts, Different Evaluation
Innovation context / developmental evaluation: the initiative is in development; evaluation is used to provide feedback on the creation of the initiative.
Mature contexts:
- Formative: evaluation is used to help improve the initiative
- Implementation / process: evaluation examines whether the initiative is implemented as intended and/or meeting targets
- Outcome: evaluation is used to assess the impact of the initiative

13 Developmental Evaluation Niches
Pre-formative
Ongoing development of an existing model
Adaptation to a new context
Sudden change or crisis
Major systems change

14 Developmental Evaluation Goals
Framing the intervention
Testing quick iterations
Tracking developments
Surfacing tough issues

15 Implementation and Process Evaluation
Evaluation involves checking the assumptions made while the program was being planned:
- The extent to which implementation has taken place
- The nature of the people being served
- The degree to which the program operates as expected
- Whether people drop out of the program

16 Examples of Process/Implementation Evaluation Questions
Are there inconsistencies between planned and actual implementation?
What is working, or has worked, well in terms of program implementation?
What challenges and barriers have emerged as the program has been implemented?
What factors have helped implementation?

17 Examples of Process/Implementation Evaluation Questions
What issues have arisen between stakeholder groups, and how have they been resolved?
What do participants say is helpful and not helpful about the program?
What are the key factors in the program’s environment that are influencing program implementation? Structures, relationships, resources?

18 Getting Results
Intervention effectiveness by implementation effectiveness:
- Effective intervention, effective implementation: actual benefits
- Effective intervention, ineffective implementation: inconsistent, not sustainable, poor outcomes
- Ineffective intervention, effective implementation: poor outcomes, sometimes harmful
- Ineffective intervention, ineffective implementation: poor outcomes
Institute of Medicine, 2000; 2001; 2009

19 Outcome Evaluation
What are the program results / effects?
Is the program achieving its goals?
Are program recipients performing well?
What constitutes a successful outcome?

20 Selecting the Outcome Evaluation Design
Design options:
- Pre-experimental
- Quasi-experimental
- Experimental
Consider threats to validity:
- If a change occurs, can the program take credit? (internal validity)
- To whom can the results apply? (external validity or generalizability)

21 Design Options I: Pre-experimental
Single-group designs (no control group):
- Collect information at only one point in time and compare it to the expected outcome without the program
- Collect information at two points in time (pre-post design)
Less intrusive and expensive, and less effort to complete, but more threats to internal validity. A minimal pre-post analysis sketch follows.
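A minimal sketch of the pre-post idea, using hypothetical ED-visit counts and a paired t-test chosen purely for illustration (the slide does not prescribe a specific analysis):

```python
# Single-group pre-post comparison on hypothetical data (illustrative only).
from scipy import stats

ed_visits_pre = [8, 12, 6, 9, 15, 7, 10, 11]   # hypothetical ED visits before the program
ed_visits_post = [5, 9, 6, 4, 11, 6, 7, 8]     # hypothetical ED visits after the program

changes = [post - pre for pre, post in zip(ed_visits_pre, ed_visits_post)]
mean_change = sum(changes) / len(changes)

# Paired t-test on the pre-post difference. With no control group, any observed
# change cannot be attributed to the program alone (internal-validity threats remain).
t_stat, p_value = stats.ttest_rel(ed_visits_post, ed_visits_pre)
print(f"Mean change in ED visits: {mean_change:.2f} (t = {t_stat:.2f}, p = {p_value:.3f})")
```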

22 Design Options II: Quasi-experimental
Naturally occurring control conditions:
- Collecting information at additional time points before and after the program
- A non-equivalent control group
- Observing other dependent variables
- Combining design types to increase internal validity
The main threat to internal validity is pre-existing differences between the two groups (selection threat). An illustrative analysis sketch follows.
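One common way to analyse a non-equivalent control group design is a difference-in-differences comparison; the sketch below uses hypothetical data and illustrates the idea only, not the CATCH-ED analysis:

```python
# Difference-in-differences on hypothetical outcomes (illustrative only).
from statistics import mean

program_pre = [10, 12, 8, 14, 9]    # program group, before
program_post = [6, 8, 5, 9, 7]      # program group, after
control_pre = [11, 9, 13, 10, 12]   # non-equivalent comparison group, before
control_post = [10, 8, 12, 9, 11]   # non-equivalent comparison group, after

program_change = mean(program_post) - mean(program_pre)
control_change = mean(control_post) - mean(control_pre)

# The estimate nets out trends common to both groups, but selection differences
# between non-equivalent groups can still bias the result.
did_estimate = program_change - control_change
print(f"Program change: {program_change:.2f}")
print(f"Control change: {control_change:.2f}")
print(f"Difference-in-differences estimate: {did_estimate:.2f}")
```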

23 Design Options III: Experimental Designs
Randomized controlled trials.
Objections to experiments:
- “Don’t experiment on me!”
- “We already know what’s best.”
- “Experiments are just too much trouble.”
When to conduct experiments:
- When stakes are high
- When there is controversy about program effects
- When policy change is desired
- When demand is high
A simple random-allocation sketch follows.
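A minimal sketch of simple random allocation to two arms, using hypothetical participant IDs; real trials would typically add blocking or stratification:

```python
# Simple 1:1 random allocation for an RCT-style outcome evaluation (illustrative only).
import random

participant_ids = [f"P{i:03d}" for i in range(1, 21)]  # hypothetical participants

random.seed(42)  # fixed seed only so the example is reproducible
shuffled = random.sample(participant_ids, k=len(participant_ids))
half = len(shuffled) // 2
intervention_arm = sorted(shuffled[:half])
control_arm = sorted(shuffled[half:])

print("Intervention arm:", intervention_arm)
print("Control arm:", control_arm)
```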

24 Measurement and Data Collection
What needs to be measured?
What are the most appropriate indicators?
How will we collect the data?
What resources are required for data collection and analysis?

25 Data Sources for Evaluation
Intended beneficiaries of the program
Program participants
Community indices
Providers of services
Program staff
Program records
Observers

26 Selecting Measures
Sources of data for evaluation: which sources should be used?
Good assessment procedures:
- Use multiple variables
- Use variables relevant to information needs
- Use valid and reliable measures
- Use measures that can detect change over time
- Use cost-effective measures
An illustrative reliability check follows.
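As one illustration of checking a measure's reliability, the sketch below computes Cronbach's alpha for a hypothetical multi-item scale (the slide does not name a specific statistic):

```python
# Cronbach's alpha for internal consistency of a hypothetical 4-item scale.
import numpy as np

# Rows = respondents, columns = items (hypothetical scores on a 1-5 scale)
scores = np.array([
    [3, 4, 3, 4],
    [2, 2, 3, 2],
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 4, 5, 5],
])

k = scores.shape[1]
item_variances = scores.var(axis=0, ddof=1)
total_variance = scores.sum(axis=1).var(ddof=1)

# alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)
alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
print(f"Cronbach's alpha: {alpha:.2f}")
```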

27 Quantitative and Qualitative Approaches
Mixed methods provide richer information.
Quantitative methods give breadth of understanding; qualitative methods provide depth of understanding.
Together, the different methods help explain whether, how, and why the intervention works in a given context.

28 Use of Qualitative Methods
Before a trial:
- To develop and refine the intervention
- To develop or select appropriate outcome measures
- To generate hypotheses for examination
During a trial:
- To examine whether the intervention was delivered as intended
- To identify key intervention ingredients
- To explore patients’ and providers’ experience of the intervention
After a trial:
- To explore reasons for findings
- To explain variations in effectiveness within the sample
- To examine the appropriateness of the underlying theory
Lewin et al, 2009

29 CATCH-ED Evaluation

30 CATCH-ED Evaluation
Before the trial: developmental evaluation
During the trial: evaluation of process / implementation
- Process measures
- Narrative interviews and focus groups
- Direct observation
Outcome evaluation: TBD

31 Implementation Evaluation Findings
Barriers:
- Poor identification and referral processes
- Incomplete understanding of drivers of ED use
- Decentralized structure
- Long wait times for other services
Facilitators:
- Training and technical assistance
- Partnership with the local health integration network
- Agency commitment
- ED presence of case managers

32 Evaluation Questions During the Trial
Who are the clients being served by the program? Demographic and clinical characteristics (survey questionnaires, program records)
How do they experience continuity of care and the working relationship / alliance with their case manager? (Narrative interviews, survey questionnaires)
Is the intervention being delivered as intended? (Direct observation, interviews, monthly reports by case managers)
What is the effectiveness of the program in decreasing ED use and improving health outcomes? (RCT)

33 Using CATCH-ED Program Records
How often were patients seen? How many times were patients seen?
Were patients referred to appropriate services?
Was there a warm handoff to other services?
What was the appropriateness and comprehensiveness of the services offered?
A sketch of summarising such records follows.
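A brief sketch of how questions like these might be answered from program records, using hypothetical field names ("patient_id", "contact_date") rather than the actual CATCH-ED record structure:

```python
# Summarising hypothetical case-management contact records (illustrative only).
import pandas as pd

contacts = pd.DataFrame({
    "patient_id": ["A01", "A01", "A01", "B02", "B02", "C03"],
    "contact_date": pd.to_datetime([
        "2013-01-03", "2013-01-10", "2013-01-24",
        "2013-01-05", "2013-02-02", "2013-01-15",
    ]),
})

per_patient = contacts.groupby("patient_id").agg(
    total_contacts=("contact_date", "count"),
    first_contact=("contact_date", "min"),
    last_contact=("contact_date", "max"),
)

# Average contacts per week over each patient's observed follow-up window
weeks = (per_patient["last_contact"] - per_patient["first_contact"]).dt.days / 7
per_patient["contacts_per_week"] = per_patient["total_contacts"] / weeks.clip(lower=1)
print(per_patient)
```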

34 Your questions? What are your main challenges?

35 MCP Group Coaching Sessions
Interested teams can participate in monthly coaching activities.
Collaborative approach: teams can share learnings to support one another.
Role of coaches: facilitate team meetings, provide feedback, connect teams to resources.
Teams can help each other in implementing, evaluating, and building capacity for integration of care for medically complex patients.

36 Coaching Process
Interested teams will be contacted by a coach to determine topics for subsequent calls.
Based on interests shared by multiple teams, the topic and focus of group coaching sessions will be decided.
BRIDGES will then send an invitation outlining the topic and objectives to interested MCP teams for a teleconference coaching session.
Coaches will be available to facilitate coaching sessions for the duration of the initiative.

37 MCP Preconference: Coaching Workshop
Coaching Session Facilitator: Patricia O’Brien, Manager, Quality Improvement Program, Department of Family & Community Medicine, University of Toronto
Coaching Workshop Theme: Coaching for sustainability and spread; the role of coaching support in encouraging improvement and change
Workshop Format:
- Overview of the coaching model
- Breakout sessions with coaches modelling best practices

38 References
Posavac EJ & Carey RG. Program Evaluation: Methods and Case Studies (6th ed.). New Jersey: Prentice Hall, 2003.
Patton MQ. Utilization-Focused Evaluation (4th ed.). Thousand Oaks, CA: Sage, 2008.
Damschroder LJ et al. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implementation Science 2009;4:50.
Pawson R. Evidence-Based Policy: A Realist Perspective. Sage, 2006.
Pawson R & Tilley N. An introduction to scientific realist evaluations. In Chelimsky E & Shadish WR (Eds.), Evaluation for the 21st Century: A Handbook. Thousand Oaks, CA: Sage, 1997.
Renger R & Titcomb A. A three-step approach to teaching logic models. American Journal of Evaluation 2002;23.
Treasury Board of Canada Secretariat. Policy on Evaluation. 2009.
Lewin S, Glenton C, Oxman AD. BMJ 2009;339:b3496.

