Clarifying uncertainties and needs for monitoring and evaluation

Clarifying uncertainties and needs for monitoring and evaluation
– Are there important uncertainties that should be addressed prior to making a decision?
– What should potentially be monitored and how?
– What should potentially be evaluated and how?

It is unrealistic to assume that we can predict the impacts of a health policy with certainty. Rigorous evaluation is often lacking, yet policymakers must still make decisions. Consequently, some degree of monitoring and evaluation is almost always warranted, provided that the data will be used to:
– Reduce important uncertainties about implementing an option
– Identify the need for corrective actions, if things are not going as planned
– Support continuation of the option, if things are going as planned

“Both politically, in terms of being accountable to those who fund the system, and also ethically, in terms of making sure that you make the best use possible of available resources, evaluation is absolutely critical.”
– Julio Frenk, former Minister of Health of Mexico

Are there uncertainties that should be addressed prior to making a decision?
It may be prudent to undertake an evaluation before full implementation for options that are promising but that:
– Entail large investments, or
– Carry important risks of adverse effects
Undertaking a rigorous evaluation, for example as a pilot study, can potentially:
– Prevent wasting resources
– Reduce the risk of unintended impacts
– Ensure that desired impacts will be achieved

Caution is warranted whenever judgements about the impacts of an option are based on:
– Theories
– Surrogate outcomes
– Limited observational studies, or
– Inadequate randomised impact evaluations
The advantages of undertaking an evaluation prior to fully implementing an option are likely to outweigh the disadvantages when there is a substantial risk of:
– The benefits not outweighing the harms and costs, or
– A potential need to make important modifications
However, this depends on an evaluation being:
– Possible to complete within an acceptable timeframe
– Feasible
– Affordable

Advantages and disadvantages of undertaking an evaluation prior to fully implementing an option
The findings of the evaluation (the balance between the benefits, harms and costs) may favour the policy or may not favour the policy.
Advantages:
– Potential for improvements prior to implementation
– Possible to stop implementation and to reconsider options
Disadvantages:
– Delay in implementation

Describing uncertainties that potentially should be addressed prior to full implementation in a policy brief
Depending on how compelling the arguments are for undertaking an evaluation, such options can be described by:
– Flagging the uncertainties and ensuring that consideration is given to undertaking a prior evaluation
– Describing full implementation and undertaking a prior evaluation as two different options
– Describing the option as undertaking an evaluation, indicating that it should not be fully implemented without first undertaking an evaluation

Questions or comments about addressing uncertainties before making a decision?

What should potentially be monitored and how?
In describing options in a policy brief, consideration should be given to the need for monitoring. The extent to which monitoring is necessary, and what should be monitored, depends on how much uncertainty there is regarding the inputs, activities, outputs and impacts for an option. The extent to which specific types of uncertainty should be described in a policy brief depends on the:
– Degree of uncertainty
– Potential for monitoring to reduce important uncertainties
– Feasibility of monitoring
– Ability and preparedness to act on the results of monitoring

Types of indicators, reasons for monitoring them, and potential actions

Inputs
– Definition: financial, human and material resources
– Reason for monitoring: uncertainty about the magnitude of the resources that are required
– Potential actions: adjustments to the budget

Activities
– Definition: ways in which inputs are used
– Reason for monitoring: uncertainty about how resources will be used, or to ensure that they are used as intended
– Potential actions: changes in how resources are allocated or used

Outputs
– Definition: services provided, or changes or short-term effects of activities
– Reason for monitoring: uncertainty about the immediate effects of activities
– Potential actions: changes in the budget, how resources are used, or the activities that are undertaken, or a decision to continue or discontinue implementing the option

Impacts
– Definition: desirable and undesirable effects that are important to those affected
– Reason for monitoring: uncertainty about the extent to which outputs accurately and completely reflect the likely impacts of the option
– Potential actions: changes in the services that are provided or how they are provided, or a decision to continue or discontinue implementing the option or to undertake an impact evaluation

For uncertainties for which there are compelling reasons for monitoring, consideration should be given to which specific indicators to use, i.e. the specific types of data that could potentially be collected to provide a reliable means to measure inputs, activities, outputs or impacts. These may be:
– Simple counts of events, e.g. the number of vaccinations conducted within a set period of time
– Constructs based on various data sources, e.g. the proportion of all children being fully immunised before their first birthday (see the sketch below)
The choice of indicators for monitoring should be based on their characteristics, including the extent to which:
– They are acceptable
– Valid, reliable and consistent data are available
– They are affordable
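As a concrete illustration of such a constructed indicator, the following is a minimal sketch, assuming hypothetical record structures, field names and dates, of computing the proportion of children fully immunised before their first birthday:

```python
from datetime import date

# Hypothetical immunisation records: one entry per child, with date of birth
# and the date the full vaccination schedule was completed (None if never).
records = [
    {"dob": date(2022, 3, 1), "fully_immunised_on": date(2023, 1, 15)},
    {"dob": date(2022, 6, 10), "fully_immunised_on": None},
    {"dob": date(2022, 9, 5), "fully_immunised_on": date(2023, 11, 2)},
]

def first_birthday(dob: date) -> date:
    """Return the child's first birthday (a 29 February birth maps to 1 March)."""
    try:
        return dob.replace(year=dob.year + 1)
    except ValueError:  # dob is 29 February
        return date(dob.year + 1, 3, 1)

# Numerator: children fully immunised before their first birthday.
numerator = sum(
    1 for r in records
    if r["fully_immunised_on"] is not None
    and r["fully_immunised_on"] < first_birthday(r["dob"])
)
# Denominator: all children in the monitored cohort.
proportion = numerator / len(records)
print(f"Fully immunised before first birthday: {proportion:.1%}")
```

In practice the denominator would come from a birth register or census rather than the vaccination records themselves, which is part of what makes such constructs harder to measure reliably than simple counts.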

Why consider the choice of indicators in a policy brief?
– To ensure that due consideration is given to important uncertainties and how to address them
– To inform discussions about the need for monitoring an option during the policy development process
The amount of detail that can be included in a policy brief regarding monitoring is limited; it may be desirable to include key considerations in the text and more detailed considerations, when relevant, in an appendix. The risk of not incorporating these considerations is that monitoring will be treated as an afterthought, rather than as an integral component of the option.

Worksheet for considering needs and alternatives for monitoring implementation of an option

Questions or comments about monitoring?

What should potentially be evaluated and how?
Although monitoring can sometimes be used to reduce uncertainties about the impacts of an option, often it will not be adequate. “Evaluation” is sometimes used interchangeably with “monitoring”, but evaluation, or more specifically ‘impact evaluation’, usually refers to an assessment of whether observed changes in the impacts that are measured can be attributed to the implementation of the option.

Monitoring does not necessarily indicate whether an option has had an impact on the indicators that have been measured. Indicators will almost always be influenced by factors other than the option being implemented, which makes it extremely difficult to determine what caused any observed changes. If monitoring reveals, for example, that performance improved over time, implementation of the option may not be the only causal factor and may, in fact, not have caused the change at all.

Impact evaluations should, so far as possible, measure all desirable and undesirable outcomes that are important to those affected and for which there is substantial uncertainty, i.e. where the results of the evaluation could conceivably affect a decision about whether implementation of the option is worthwhile.

An impact evaluation must estimate what would occur in the absence of implementing an option and compare this to an estimate of what happens when the option is implemented. Ideally an evaluation should be built into an option as it is designed, which means that the evaluation should be planned as early as possible. Policy briefs can help to ensure that appropriate consideration is given to the need for impact evaluations, and to when and how they should be conducted.

Evaluation methods and findings should be as reliable as possible. Attributing a change to the implementation of an option requires a comparison, and the compared groups should be as similar as possible in order to rule out influences other than the option being evaluated. The most effective way of achieving this is random allocation. Randomised trials can be conducted:
– As pilot projects before a programme is introduced at a national level, or
– In parallel with full-scale implementation, for example by randomly allocating the districts in which the option will be implemented first and using the districts where implementation is delayed as the comparison (see the sketch below)
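A minimal sketch of the stepped roll-out described above, assuming a hypothetical list of districts and an arbitrary fixed seed so that the allocation is reproducible and auditable:

```python
import random

# Hypothetical list of districts eligible for the programme.
districts = ["District A", "District B", "District C", "District D",
             "District E", "District F", "District G", "District H"]

rng = random.Random(2024)  # fixed seed: the allocation can be re-run and audited
shuffled = districts[:]
rng.shuffle(shuffled)

half = len(shuffled) // 2
early = sorted(shuffled[:half])    # implement the option here first
delayed = sorted(shuffled[half:])  # comparison group until roll-out reaches them

print("Early implementation:", early)
print("Delayed (comparison):", delayed)
```

Because chance alone determines which districts go first, the two groups should be similar on both known and unknown factors, which is what allows observed differences to be attributed to the option.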

Randomised trials are not always feasible. Alternative approaches include:
– Interrupted time series analyses
– Controlled before-after studies

Interrupted time series analyses
Can be used when data are collected at multiple time points before and after implementation of the option (see the sketch below).
Advantages:
– May be feasible and relatively easy to conduct if the necessary data are available
– A control group is not needed
– Controls for trends over time and variability in indicators over time
Disadvantages:
– Influences other than the option being evaluated may affect the observed changes
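A minimal sketch of an interrupted time series analysis using segmented regression; the monthly indicator values are made up for illustration, and the statsmodels library is assumed to be available:

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical monthly indicator values: 12 months before and 12 months
# after the option was implemented.
y = np.array([52, 54, 53, 55, 56, 55, 57, 58, 57, 59, 60, 59,   # before
              66, 67, 69, 68, 70, 71, 70, 72, 73, 72, 74, 75])  # after
n = len(y)
time = np.arange(n)                              # underlying secular trend
post = (time >= 12).astype(int)                  # 1 from the month of implementation
time_after = np.where(post == 1, time - 12, 0)   # trend change after implementation

# Segmented regression: intercept, pre-existing trend, level change, slope change.
X = sm.add_constant(np.column_stack([time, post, time_after]))
model = sm.OLS(y, X).fit()

# Coefficient order: const, pre-trend, immediate level change, change in trend.
print(model.params)
print(model.conf_int())
```

The level-change and slope-change coefficients are what distinguish an effect of the option from the trend that was already under way, which is why this design controls for trends over time.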

Controlled before-after studies
Changes before and after implementation of an option are compared with the changes observed during the same time period in areas where the option was not implemented, e.g. in neighbouring districts or countries (see the sketch below).
Advantages:
– May sometimes be the only practical option, e.g. where randomisation is not feasible for practical or political reasons and where it is not feasible to collect data at multiple time points
Disadvantages:
– Rarely provide reliable estimates of impacts: known or unknown differences between the compared groups may exert more influence on the outcomes that are measured than the option that is being evaluated
Consequently, it is generally difficult, if not impossible, to confidently attribute observed changes (or a lack of change) to the implementation of an option.
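A minimal sketch of the comparison underlying a controlled before-after study, expressed as a simple difference-in-differences calculation with made-up indicator means:

```python
# All figures are hypothetical indicator means (e.g. % coverage).
implementation_area = {"before": 55.0, "after": 70.0}   # option implemented here
comparison_area = {"before": 54.0, "after": 61.0}       # neighbouring area, no option

change_implementation = implementation_area["after"] - implementation_area["before"]
change_comparison = comparison_area["after"] - comparison_area["before"]

# The comparison area's change approximates what would have happened anyway;
# the difference between the two changes is the estimated impact of the option.
estimated_impact = change_implementation - change_comparison
print(f"Change with the option:    {change_implementation:+.1f}")
print(f"Change without the option: {change_comparison:+.1f}")
print(f"Estimated impact (difference-in-differences): {estimated_impact:+.1f}")
```

The estimate is only as good as the assumption that the two areas would otherwise have changed in parallel, which is exactly the assumption that unknown differences between the groups can violate.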

Other study designs may sometimes be used to assess the impacts of health policy options. However:
– Some designs are typically not feasible for assessing the impacts of health policies, e.g. cohort studies and case-control studies
– Other designs rarely provide compelling evidence: before-after studies, historically controlled studies and cross-sectional studies
– Qualitative studies (as well as other quantitative designs, such as surveys) can provide valuable evidence to explain how an option worked, or why it did or did not work, but they do not generate data that can be used to estimate the effect of an option beyond the perceptions of those who were interviewed or surveyed

Rigorous evaluations can be expensive to conduct, and budget, time or data constraints can limit the ability to undertake them. These constraints can affect the reliability of impact evaluations in a number of ways, including:
– Threats to the overall validity of the results, e.g. due to insufficient planning or follow-up, a lack of baseline data, reliance on inadequate data sources, or selection of an inappropriate comparison
– Inadequate samples, e.g. due to selection of a convenience sample that is not representative or a sample size that is too small (see the sketch below)
– Inadequate attention to contextual factors
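To show how an inadequate sample size can be anticipated at the planning stage, here is a minimal sketch of a sample size calculation; the coverage figures, power and significance level are assumptions, and the statsmodels library is assumed to be available:

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Hypothetical planning figures: coverage of 55% without the option, hoped-for
# coverage of 65% with it; we want 80% power at a two-sided 5% alpha.
effect = proportion_effectsize(0.65, 0.55)  # Cohen's h for two proportions
n_per_group = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.80, alternative="two-sided"
)
print(f"Required sample size per group: {n_per_group:.0f}")
```

If the feasible sample falls well short of this figure, the evaluation is unlikely to detect the effect it was designed to find, which argues for redesigning it or reconsidering whether to evaluate at all.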

It may be possible to address budget, time and data constraints, for example by starting planning early and reducing the cost of data collection. However, for an impact evaluation to be worthwhile, it is important to ensure that it will provide reliable information. Thus, before implementing an evaluation, an assessment should be made as to whether an adequate evaluation is possible. If it is not, an assessment needs to be made as to whether to implement the option without evaluating it, given the nature of the uncertainty about its potential impacts.

Evaluability assessment
Several models have been described for assessing the extent to which an adequate evaluation is possible, which is sometimes referred to as “evaluability assessment”. One purpose of an evaluability assessment is to determine whether the intended objectives of an evaluation can be achieved with the available resources and data, and within the specified evaluation time horizon. The focus is on whether the particular level or levels of government (or non-governmental organisations) are in a position to begin collecting, analysing and reporting evaluation data so as to inform the decision-making process.

A policy option is evaluable to the extent that:
– Goals and priority information needs are well defined
– Goals are plausible
– Relevant data can be obtained at a reasonable cost
– The intended users of the evaluation results are able to agree on how they will use the information

Worksheet for considering needs and alternatives for evaluating the impacts of an option

Decisions about proceeding with an option when there are important uncertainties about its impacts and it is not evaluable depend on judgements about:
– The size of the problem
– What the alternatives are
– The expected impacts of the programme
– The extent of uncertainty about those impacts
– Values
– Costs
– Feasibility
– Acceptability

“We will serve the public more responsibly and ethically when research designed to reduce the likelihood that we will be misled by bias and the play of chance has become an expected element of professional and policy making practice, not an optional add-on.”
– Iain Chalmers

“If you are poor, actually you need more evidence before you invest, rather than if you are rich.”

Questions or comments about evaluation?