Food and Agriculture Organization of the United Nations


FAO Impact Assessments in Recovery Context
Food and Agriculture Organization of the United Nations, Office of Evaluation
UNEG EPE 2015, March 2015

Fragile, insecure and volatile contexts
- FAO cash-for-work programme in Somalia
- FAO livelihood recovery programme post-floods in Pakistan
- FAO South Sudan response programme
Field impact assessments in these fragile, insecure and volatile contexts fall under the restoring livelihoods/recovery component of the food security and nutrition sector. Key features:
- High demand for evaluation and field impact assessment (FIA)
- Large interventions and the complexity of many community-level interventions
- Crowded operating environment
- Weak M&E data (no baseline, high output delivery, etc.)

Traditional methods and their limitations
- Randomized controlled trials (RCTs) or statistical counterfactuals
- Quasi-experimental designs with a pretest/posttest comparison and a matched control group
- Non-experimental options (observation, interactions, surveys or case studies)
Limitations in this context:
- RCTs: no data available for social accounting matrices, market data, etc.; very expensive
- Statistical counterfactuals and pretest/posttest designs: no comparison group, control groups cannot be identified (IDPs), no baselines, no access in a volatile environment, etc.
- Non-experimental options: systematic positive bias (reliance on direct beneficiaries and implementing partners)
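As a rough illustration of the quasi-experimental pretest/posttest design mentioned above, the sketch below computes a simple difference-in-differences estimate for a treated group and a matched comparison group. The figures and variable names are hypothetical, not drawn from the FAO assessments.

```python
# Minimal sketch of a pretest/posttest design with a matched comparison group,
# summarised as a difference-in-differences (DiD). All numbers are hypothetical.

# Mean outcome (e.g. a food consumption score) before and after the intervention
pre_treated, post_treated = 42.0, 55.0   # programme communities
pre_control, post_control = 41.0, 46.0   # matched comparison communities

change_treated = post_treated - pre_treated   # 13.0
change_control = post_control - pre_control   # 5.0

# DiD: the change in the treated group beyond the change in the comparison group
did_estimate = change_treated - change_control
print(f"Estimated programme effect (DiD): {did_estimate:.1f} points")  # 8.0 points
```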

The alternatives
- Realist evaluations
- Process tracing
- Participatory Rural Appraisal (PRA) time-related and relational techniques
- Creative identification of comparison groups (judgmental matching)
- Pipeline design
- Creative uses of secondary data
Realist evaluations: the approach focuses on the specific context in which a programme is implemented and addresses the questions "What works?", "For whom?", "When?" and "Why?". Whereas the conventional counterfactual focuses on the comparison group and often pays very little attention to the "factual", realist evaluation seeks to understand in detail the mechanisms through which an intervention operates and how different sectors of the target population are affected.
Process tracing: looks at the small steps along the way and asks "Could there have been another way of doing each step, and what difference would this have made?". It breaks the intervention into components; very useful, but it requires a sound theory of change (ToC).
PRA time-related and relational techniques: social mapping, pairwise ranking and problem analysis, well-being analysis, etc.
Judgmental matching: creating a comparison group by finding a match for each site in the treatment group, based on the evaluation team's judgement about which variables are important. Rather than comparing areas with and without the project, it can also be used to compare the project with alternative interventions (see the matching sketch below).
Pipeline design: when a programme is implemented in phases over a period of time (pilot areas, then scaled up), the segments of the population affected only by the later phases can be used as comparison groups for the earlier phases.
Creative uses of secondary data: the object is to find secondary data that describes in sufficient detail what change occurred in reasonably comparable communities during the time frame of the programme. Though this can be done qualitatively, adding a source such as an IPPC map strengthens the comparison.
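A hedged sketch of the judgmental matching idea: for each treatment site, the evaluation team picks the comparison site that is closest on a handful of variables it judges to be important. The sites, variables and scoring below are hypothetical, purely to show the mechanics.

```python
# Illustrative sketch of judgmental matching with hypothetical sites and variables.

treatment_sites = {
    "Site A": {"households": 800, "distance_to_market_km": 12, "flood_affected": 1},
    "Site B": {"households": 300, "distance_to_market_km": 40, "flood_affected": 1},
}
candidate_sites = {
    "Site X": {"households": 750, "distance_to_market_km": 15, "flood_affected": 1},
    "Site Y": {"households": 320, "distance_to_market_km": 35, "flood_affected": 1},
    "Site Z": {"households": 900, "distance_to_market_km": 5,  "flood_affected": 0},
}

def dissimilarity(a, b):
    """Sum of normalised absolute differences across the chosen variables."""
    return sum(abs(a[k] - b[k]) / (abs(a[k]) + abs(b[k]) + 1e-9) for k in a)

# Pick, for each treatment site, the candidate site with the smallest dissimilarity
for name, vars_t in treatment_sites.items():
    match = min(candidate_sites, key=lambda c: dissimilarity(vars_t, candidate_sites[c]))
    print(f"{name} -> matched comparison site: {match}")
```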

Examples
- FAO cash-for-work programme in Somalia: Participatory Rural Appraisal (PRA)
- FAO livelihood recovery programme in Pakistan: pipeline design
- FAO South Sudan response programme: judgmental matching
The PRA time-related and relational techniques, pipeline design and judgmental matching are as described on the previous slide.
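As a rough sketch of the pipeline design used in the Pakistan example, the snippet below compares communities covered by an early programme phase with comparable communities scheduled for a later phase and not yet reached. The figures are hypothetical.

```python
# Minimal sketch of a pipeline design: communities scheduled for later phases
# act as the comparison group for communities already covered. Hypothetical data.
from statistics import mean

phase1_outcomes = [62, 58, 65, 60]   # outcome indicator, communities covered in phase 1
phase3_outcomes = [51, 49, 55, 50]   # comparable communities not yet covered (phase 3)

effect = mean(phase1_outcomes) - mean(phase3_outcomes)
print(f"Phase 1 vs not-yet-covered communities: {effect:.1f} point difference")  # 10.0
```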

Lessons learned - 1
- Highly useful: it reveals a range of programmatic issues that feed into future project designs, corrective actions and better targeting.
- Quality secondary data can be used to measure quantitative outcome and impact indicators.
- Attributing causality remains a challenge for generalization, though the approach provides detail on underlying dynamics that helps paint the big picture.
- Timing of the impact assessment is key.
- It is important to assess the adequacy of the secondary data in terms of sample coverage, when it was collected, and how the indicators were measured.
- Often secondary data is not available to control for alternative explanations of the observed changes (such as special characteristics of the project group that make it more likely to succeed).

Lessons learned - 2
- Select the best available comparison group on which data can be collected within tight budget and time frames.
- Comparison with other programmes that use different approaches for achieving similar objectives can also provide a counterfactual. Sometimes these variations are built into the programme design; in other cases they occur because of unforeseen circumstances or because each project has considerable autonomy in how programmes are organized.
- To obtain a counterfactual, similar PRA exercises can be conducted in comparable communities, asking them to identify changes that occurred in their lives relative to the kinds of changes the programme was promoting in the target communities.

Thank you
FAO Office of Evaluation