Technical Assistance on Evaluating SDGs: Leave No One Behind


Module 10 Rapid, non-technical review of EFGR-responsive evaluation methodologies Technical Assistance on Evaluating SDGs: Leave No One Behind EvalGender+ Network together with UN Women, UNEG and EvalPartners

Presentation developed by Michael Bamberger and Asela Kalugampitiya, based on Chapter 4 of "Evaluating the Sustainable Development Goals within a 'No-one left behind' lens through equity-focused and gender-responsive evaluations"

Outline
- Why it is important for non-evaluation specialists to understand the basic principles of evaluation design
- Evaluation levels and types
- The main evaluation methodologies
- The integration of EFGR evaluations into the SDGs

1. Why it is important for non-evaluation specialists to understand the basics of evaluation methodology
- There is no "best" evaluation design
- Different designs are required to answer different evaluation questions
- Many evaluators have a preferred evaluation design that they will often recommend even when it is not the most appropriate
- All evaluation designs have strengths and weaknesses
- Consequently, evaluation managers and other stakeholders have an important role in working with evaluators to select the best design for a particular evaluation

2. Evaluation: levels and types
Evaluations can be conducted at three levels:
- Policy level: to assess the effectiveness with which policies are implemented and how well they achieve their objectives
- Program level: to assess the effectiveness with which programs are implemented and coordinated, how well each component achieves its objectives, and the effectiveness with which the overall program achieves its broad goals and objectives
- Project level: to assess the effectiveness with which projects are implemented and how well they achieve their objectives

There are four main types of evaluation:
- Policy evaluations: How well do the design, implementation and outcomes of national and sector policies contribute to EFGR goals?
- Formative evaluations: Providing regular feedback to management and stakeholders on how the design and implementation of SDG projects contribute to EFGR goals; proposing corrective measures; learning lessons
- Developmental evaluations: Similar to formative evaluations, but focusing on innovative and emergent projects; assessing how complexity dimensions affect implementation and EFGR goals
- Summative (impact) evaluations: Estimating quantitative (and sometimes qualitative) changes in EFGR indicators; assessing the extent to which these changes can be attributed to the SDG program

3. The main evaluation methodologies
- Experimental and quasi-experimental designs
- Statistical designs
- Theory-based evaluations (including theory of change)
- Case study methods
- Qualitative and participatory methods
- Review and synthesis
- Complexity-responsive evaluations

A. Experimental and quasi-experimental designs
Experimental designs [randomized control trial = RCT]:
- RCTs provide the strongest methodology for assessing project impacts
- Subjects are randomly assigned either to receive the project treatment [water, school meals, small business training, etc.] or to a control group that does not receive it
- RCTs require large samples and are quite expensive
- They work better for simple project designs with few components and few outcomes than for large, complex programs
- There are ethical and political concerns about using RCTs

Simple experimental design [RCT], with random assignment to the two groups:

                  Before project   Intervention   After project
  Project group   P1               X              P2
  Control group   C1               (none)         C2
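The P1/P2/C1/C2 layout above maps directly onto a double-difference impact estimate: the change in the project group minus the change in the control group. A minimal sketch, with invented attendance figures purely for illustration:

```python
def impact_estimate(p1, p2, c1, c2):
    """Double difference: change in the project group (P2 - P1)
    minus change in the control group (C2 - C1)."""
    return (p2 - p1) - (c2 - c1)

# Hypothetical school-meal attendance rates (%) before and after:
# project group rose 60 -> 75, control group rose 61 -> 66.
print(impact_estimate(p1=60.0, p2=75.0, c1=61.0, c2=66.0))  # → 10.0
```

With random assignment, the control group's change (here 5 points) stands in for what would have happened to the project group without the intervention, so the remaining 10 points can be attributed to the project.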

Quasi-experimental design [QED]
- Similar design to the RCT, but random assignment is not possible and groups are matched using statistics or judgment
- Less precise than RCTs, but can be used in a wider range of situations

Comparing RCTs and QEDs
- RCTs must be planned and administered before the project begins
- QEDs can be administered after the project has begun
- RCTs are methodologically more precise, but QEDs are more flexible
- Both designs are quite expensive, as they require large samples and a high level of research experience
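The matching step that distinguishes a QED can be sketched very simply: each project participant is paired with the most similar non-participant on observed characteristics. The variables (age, income) and all data below are invented for illustration; real matching would typically use propensity scores over many covariates.

```python
def nearest_match(participant, pool, keys=("age", "income")):
    """Return the comparison-group member closest to `participant`,
    using squared Euclidean distance over the matching variables."""
    return min(pool, key=lambda c: sum((participant[k] - c[k]) ** 2
                                       for k in keys))

participant = {"age": 34, "income": 1200}
comparison_pool = [{"age": 60, "income": 3000},
                   {"age": 33, "income": 1150},
                   {"age": 20, "income": 500}]

print(nearest_match(participant, comparison_pool))  # the 33-year-old is closest
```

Because the match relies on observed variables only, unobserved differences between the groups can still bias the comparison, which is why QEDs are less precise than RCTs.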

B. Statistical designs
- Statistical modeling and econometric analysis
- Mainly used at the national level to assess the impacts of policies or country-wide programs by comparing experiences with other countries
- Countries are matched on macro-indicators such as GDP growth, per capita income, investments in different sectors, and rate of infrastructure construction
- Useful for national programs where it is not possible to identify a comparison group (counterfactual) within the country

C. Theory-based evaluation [theory of change]
- Approaches such as theory of change (TOC) are used to explain the steps through which an intervention is intended to achieve its objectives
- The TOC identifies the key assumptions about design, implementation strategies and potential constraints, which must then be tested
- A good theory of change should also cover the broader social, political, economic, legal and other factors affecting project implementation and outcomes
- Program impact and effectiveness can be assessed by comparing actual implementation experience with the TOC

Theory-based evaluation [continued]
- Theories of change can provide a framework for every kind of evaluation design
- TOCs also provide a framework for learning lessons from the experience of project design, implementation and outcomes
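Comparing actual experience against the TOC can be pictured as walking an ordered results chain and noting where the first link breaks, since that is where the TOC's assumptions need examination. The chain, levels and statuses below are all invented for the sketch:

```python
# A hypothetical results chain for a women's livelihoods project.
results_chain = [
    ("inputs",   "training funds disbursed"),
    ("outputs",  "women's groups trained"),
    ("outcomes", "groups start small businesses"),
    ("impact",   "household income rises"),
]

# Invented monitoring findings: the chain broke at the outcome level.
observed = {"inputs": True, "outputs": True, "outcomes": False, "impact": False}

for level, step in results_chain:
    status = "achieved" if observed[level] else "NOT achieved"
    print(f"{level:8s} {step}: {status}")
```

Here the evaluation would focus on the assumption linking outputs to outcomes (e.g. that trained groups have access to credit), rather than on delivery of the training itself.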

D. Case-based approaches
- The case is taken as the unit of analysis
- A case may be an individual, household, community, organization, state or even a country
- A sample of cases is selected either to be representative of a total population or to illustrate particular sub-groups (the most or least successful, outliers, etc.)
- The analysis can be qualitative and descriptive, or quantitative
- Case-based methods can be used as stand-alone evaluations or to illustrate and explore groups identified in surveys

Qualitative comparative analysis (QCA)
- QCA is a form of case study analysis that has become popular in recent years
- It requires a higher level of technical expertise, but can provide more rigorous estimates of project effectiveness and outcomes
- QCA identifies both the combination of factors that must be present for a project to achieve its intended outcomes and the combination of factors that prevents a project from achieving them
- It usually requires a sample of at least 15 cases
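At its core, QCA tabulates which combinations of conditions co-occur with the outcome across cases (the "truth table"). A toy sketch with two invented conditions and four invented cases, far below the ~15-case minimum noted above:

```python
from collections import defaultdict

# Invented cases: 1 = condition/outcome present, 0 = absent.
cases = [
    {"funding": 1, "local_buy_in": 1, "success": 1},
    {"funding": 1, "local_buy_in": 0, "success": 0},
    {"funding": 0, "local_buy_in": 1, "success": 0},
    {"funding": 1, "local_buy_in": 1, "success": 1},
]

# Group cases by their combination of conditions.
truth_table = defaultdict(list)
for case in cases:
    truth_table[(case["funding"], case["local_buy_in"])].append(case["success"])

for combo, outcomes in sorted(truth_table.items()):
    print(combo, "-> success rate", sum(outcomes) / len(outcomes))
```

In this invented data, success appears only when funding and local buy-in are both present, suggesting the two conditions are jointly necessary; real QCA software formalizes this with consistency and coverage measures.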

E. Participatory and qualitative analysis
- Involves a wide range of stakeholders in the design, implementation and interpretation of the evaluation
- Can be used for methodological reasons or to support a rights-based/empowerment approach
- Provides a deeper understanding of the lived experience of different groups
- Gives voice to poor and vulnerable groups
- Supports analysis of processes
- Helps understand the influence of contextual factors and processes of social control

Participatory and qualitative analysis [continued]
- Uses a mixed-methods approach combining a wide range of tools and techniques for data collection and analysis
- Participatory approaches are in line with the human rights and gender equality evaluations proposed by UNEG

Examples of participatory and qualitative methods
- Participatory consultative methods such as PRA
- Outcome mapping and outcome harvesting
- Most significant change
- Participant observation
- Key informant interviews
- Community consultations and some kinds of focus groups
- Longitudinal case studies

F. Review and synthesis studies
- Identification of all evaluations conducted on a particular topic (e.g. the effects of micro-credit on women's empowerment)
- Selection of the studies that satisfy standards of methodological rigor; often only randomized control trials are accepted
- May incorporate a theory of change to structure findings
- Summary of findings and lessons that are statistically sound
- Provides a useful starting point for designing programs and indicates what kinds of outcomes can be expected, and what is not likely to be achieved

G. Complexity-responsive evaluations
- Not widely used, but important for EFGR, as programs with a gender or equity focus frequently involve dimensions of complexity
- Complexity-responsive evaluations can largely use the familiar evaluation tools described in this session, but it is necessary to begin with a complexity diagnostic to understand why and how the program is complex
- They conclude by reassembling all of the individual evaluations to understand the big picture
- Complexity matters because each program component often has a positive rating while the overall program fails to achieve its broader objectives

4. Principles for integrating EFGR evaluations into the SDGs
- Begin with a review of lessons learned from past approaches and evaluations
- Build gender and equality into the theory of change and results framework
- Develop a checklist of areas where EFGR principles and indicators can be integrated into the evaluation process
- Begin with rapid diagnostic studies to help understand the EFGR issues which should be addressed
- Integrate gender and equality into ongoing or planned evaluations