Danish Evaluation Society Conference Kolding, September 2008


1 Workshop on Using Contribution Analysis to Address Cause-Effect Questions
Danish Evaluation Society Conference Kolding, September 2008 John Mayne, Advisor on Public Sector Performance

2 Workshop Objectives
Understand the need to address attribution
Understand how contribution analysis can help
Have enough information to undertake a contribution analysis on your own

3 Outline
Dealing with attribution
Contribution analysis
Working a case
Levels of contribution analysis
Conclusions

4 The challenge
Attribution for outcomes is always a challenge
Strong evaluations (such as RCTs) are not always available or possible
A credible performance story needs to address attribution
Sensible accountability needs to address attribution
Complexity significantly complicates the issue
What can be done?
A recent example of a non-experimental situation is the evaluations of Tsunami aid being solicited, for example, by UNICEF.

5 The idea
Based on the theory of change of the program,
Buttressed by evidence validating the theory of change,
Reinforced by examination of other influencing factors,
Contribution analysis builds a reasonably credible case about the difference the program is making

6 The typical context
A program has been funded to achieve intended results
The results have occurred, perhaps more or less
It is recognized that several factors likely 'caused' the results
Need to know what the program's role was in this

7 Two measurement problems
Measuring outcomes
Linking outcomes to actions (activities and outputs), i.e. attribution
Are we making a difference with our actions?

8 Attribution
Outcomes are not controlled; there are always other factors at play
Conclusive causal links don't exist
We are trying to understand better the influence we are having on intended outcomes
Need to understand the theory of the program, to establish plausible association
Something like contribution analysis can help
This is what practical attribution is all about.

9 The need to say something
Many evaluations and most public reporting are silent on attribution
Credibility is greatly weakened as a result
In evaluations, in performance reporting and in accountability, something must be said about attribution

10 Proving Causality
The gold standard debate (RCTs et al.)
Intense debate underway, especially in development impact evaluation
Some challenge RCTs (e.g. Scriven)
It does appear that RCTs have limited applicability
Then what do we do?
We need to know better when we can use RCTs: in what circumstances, in what settings?

11 Proving Causality
AEA and EES: many methods are capable of demonstrating scientific rigour
Methodological appropriateness for given evaluation questions
Causal analysis: auto mechanics, air crashes, forensic work, doctors (Scriven's Modus Operandi approach)
There are always other factors at play
Conclusive causal links don't exist
We are trying to understand better the influence we are having on intended outcomes
Plausible association

12 Theory-based evaluation
Reconstructing the theory of the program
Assessing/testing the credibility of the micro-steps in the theory (links in the results chain)
Developing and confirming the results achieved by the program
I am building here on the increasingly used theory-based approaches to evaluation.

13 Contribution analysis: the theory
There is a postulated theory of change
The activities of the program were implemented
The theory of change is supported by evidence
Other influencing factors have been assessed and accounted for
Therefore: the program very likely made a contribution
1. There is a reasoned, postulated theory of change for the program. It makes sense, it is plausible, and it is agreed by at least some of the key players. If there is no ToC, experiment!
2. The activities of the program were implemented.
3. The theory of change, or key elements thereof, is supported and confirmed by evidence, both of experts and of facts: the chain of expected results occurred.
4. Other influencing factors have been assessed and either shown not to have made a significant contribution, or their relative role in contributing to the desired result has been recognized.
Then we can say with some confidence that the program has indeed contributed to the observed desired results. We are seeking plausible association, trying to reduce uncertainty about the effects of the program.
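The four-conditions-then-conclusion inference above can be sketched as a simple check. This is a hypothetical illustration only; the condition names are mine, not from the source.

```python
# Hypothetical sketch of the contribution-analysis inference: only when
# all four conditions hold can we say, with some confidence, that the
# program very likely made a contribution. Names are illustrative.

REQUIRED = [
    "reasoned, postulated theory of change",
    "program activities implemented",
    "theory of change supported by evidence",
    "other influencing factors accounted for",
]

def contribution_likely(conditions):
    """Return True only if every required condition is evidenced."""
    return all(conditions.get(c, False) for c in REQUIRED)
```

Note that a missing condition (say, other factors never assessed) blocks the conclusion, mirroring the argument that all four premises are needed.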

14 Steps in Contribution Analysis
1. Set out the attribution problem to be addressed
2. Develop the postulated theory of change
3. Gather the existing evidence on the ToC
4. Assemble and assess the contribution story
5. Seek out additional evidence
6. Revise and strengthen the contribution story
7. Develop the complex contribution story
I set out here the basic steps in a CA. The process is iterative and best developed over time.
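Since the process is iterative, the steps can be sketched as a loop: steps 1-4 run once, steps 5-6 repeat until the contribution story is judged credible. A hypothetical sketch; the stopping rule is my illustration, not part of the source.

```python
# Hypothetical sketch of the CA steps as an iterative process.

STEPS = [
    "1. Set out the attribution problem to be addressed",
    "2. Develop the postulated theory of change",
    "3. Gather the existing evidence on the ToC",
    "4. Assemble and assess the contribution story",
    "5. Seek out additional evidence",
    "6. Revise and strengthen the contribution story",
    "7. Develop the complex contribution story",
]

def contribution_analysis(story_is_credible, max_rounds=3):
    """Steps 1-4 run once; steps 5-6 iterate until the contribution
    story is judged credible (or rounds run out); step 7 follows."""
    log = list(STEPS[:4])
    for round_ in range(max_rounds):
        if story_is_credible(round_):
            break
        log += STEPS[4:6]   # seek more evidence, revise the story
    return log + STEPS[6:]
```

For example, a story judged credible after one round of strengthening passes through steps 5 and 6 exactly once.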

15 1. Set out the attribution problem
Acknowledge the need to address attribution
Scope the attribution problem
What is really being asked?
What level of confidence is needed?
Explore the contribution expected
What are the other influencing factors?
How plausible is a contribution?

16 Cause-Effect Questions
Traditional attribution questions
Has the program caused the outcome?
How much of the outcome is caused by the program?
Contribution questions
Has the program made a difference?
How much of a difference?

17 Cause-Effect Questions
Management questions
Is it reasonable to conclude that the program made a difference?
What conditions are needed to make this type of program succeed?
Why has the program failed?

18 Step 1: Building an evaluation office contribution story
The evaluation aim is to 'make a difference' (an outcome), e.g. improvements in management and reporting, more cost-effective public service, enhanced accountability, etc.
Evaluation products (outputs):
Evaluations and evaluation reports
Advice and assistance
The performance story of an evaluation unit is as much of a challenge to tell as that of many other programmes. The unit is trying to influence programmes to do things better. There are a number of outputs produced towards that end.

19 2. Develop the ToC and Risks to It
Build the postulated results chain and ToC
Identify roles played by other influencing factors
Identify the risks to the assumptions
Determine how contested the ToC is

20 A results chain
Activities (how the program carries out its work). Examples: negotiating, consulting, inspecting, drafting legislation
Outputs (goods and services produced by the program). Examples: checks delivered, advice given, people processed, information provided, reports produced
Immediate outcomes (the first-level effects of the outputs). Examples: actions taken by the recipients, or behaviour changes
Intermediate outcomes (the benefits and changes resulting from the outputs). Examples: satisfied users, jobs found, equitable treatment, illegal entries stopped, better decisions made
End outcomes (the final or long-term consequences). Examples: environment improved, stronger economy, safer streets, energy saved
External factors influence results throughout the chain
A results chain is a simplification, but it should show the overall structure of how the program is to work
There is always a question of the level of detail in a results chain/logic chart; it depends on the intended purpose.

21 Results chain links
Why will these immediate outcomes come about?
A results chain can and should include the theory of change behind the program, i.e. describe why the links in the chain make sense.
Is it a question of belief, or is there some empirical evidence, perhaps from social science theory, behind the assumptions?
With an idea of the theory of change for the program, we have a framework for the story of the program: what the program is intended to do and why. We have the story line. But much of the story line remains wishes and hopes until there is some evidence to confirm it. This is where the nitty-gritty of evaluation appears on the scene.
Key in setting out your results chain is to be aware of external factors that may be influencing events. Your theory of change may be nice but irrelevant if outside factors are the main forces driving your expected outcomes.
In the end, you are comparing a postulated results chain (theory of change) with an observed results chain.
It's the links, not the boxes, that are key.
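The point that the links, not the boxes, carry the theory of change can be made concrete with a small data-structure sketch. This is a hypothetical illustration; the class and level names are mine.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a results chain: the levels are the boxes,
# while Link objects carry the theory of change (assumptions and risks)
# for each step. All names are illustrative, not from the source.

LEVELS = ["activities", "outputs", "immediate outcomes",
          "intermediate outcomes", "end outcomes"]

@dataclass
class Link:
    from_level: str
    to_level: str
    assumptions: list = field(default_factory=list)
    risks: list = field(default_factory=list)

def build_chain(assumptions_by_link=None):
    """Create one Link for each adjacent pair of levels."""
    assumptions_by_link = assumptions_by_link or {}
    return [Link(a, b, assumptions_by_link.get((a, b), []))
            for a, b in zip(LEVELS, LEVELS[1:])]

# e.g. the anti-smoking campaign's assumptions sit on a link, not a box
chain = build_chain({("outputs", "immediate outcomes"):
                     ["target is reached", "message is heard"]})
```

Evidence gathering then amounts to populating each `Link` with confirmation or disconfirmation, rather than just measuring the boxes.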

22 Theories of change
A results chain with embedded assumptions and risks identified
An explanation of why the results chain is expected to work; what has to happen
Example: Anti-smoking campaign leading to a reduction in smoking
Assumptions: target is reached, message is heard, message is convincing, no other major influences at work
Risks: target not reached, poor message, peer pressure very strong
This is the key tool for developing meaningful and useful performance management regimes. Developing logic models is very useful for thinking through the programme or project under consideration, i.e. for planning. The idea is to identify the underlying assumptions of why you think your activities will result in the attainment of the desired outcomes; why you think you will make a difference.

23 Strengthened management of agriculture research
Figure 1: Enhancing Management Capacity in Agricultural Research Organizations (AROs). Adapted from Horton, Mackay, Anderson and Dupleich (2000).
Results chain:
Outputs: information; training and workshops; facilitation of organizational change
Immediate outcomes: enhanced planning processes, evaluation systems, monitoring systems, and professional PM&E capacities
Intermediate outcomes: institutionalization of integrated PM&E systems and strategic management principles
Final outcomes (impacts): more effective, efficient and relevant agricultural programs
Theory of change (assumptions and risks for each link):
Outputs to immediate outcomes. Assumptions: the intended target audience received the outputs; with hands-on, participatory assistance and training, AROs will try enhanced planning, monitoring and evaluation approaches. Risks: intended reach not met; training and information not convincing enough for AROs to make the investment; only partially adopted to show interest to donors.
Immediate to intermediate outcomes. Assumptions: over time and with continued participatory assistance, AROs will integrate these new approaches into how they do business; the project's activities complement other influencing factors. Risks: trial efforts do not demonstrate their worth; pressures for greater accountability dissipate; PM&E systems sidelined.
Intermediate outcomes to better management. Assumptions: the new planning, monitoring and evaluation approaches will enhance the capacity of the AROs to better manage their resources. Risks: management becomes too complicated; PM&E systems become a burden; information overload; evidence not really valued for managing.
Better management to final outcomes. Assumptions: better management will result in more effective, efficient and relevant agricultural programs. Risks: new approaches do not deliver (great plans but poor delivery); resource cutbacks affect PM&E first; weak utilization of evaluative information.
The idea here was that for these research institutes, focused on science, better information on planning, coupled with training and workshops, plus facilitation, would lead to better planning, better managing and ultimately more effective programmes.

24 Figure 2: An initial 'theory map' of the public disclosure of health care information. From Pawson et al. (2005).
Theory one: Classification. The quality of particular aspects of health care can be monitored and measured to provide valid and reliable rankings of comparative performance.
Theory two: Disclosure. Information on the comparative performance and the identity of the respective parties is disclosed and publicised through public media.
Theory three: Sanction. Members of the broader health community act on the disclosure in order to influence subsequent performance of named parties.
Theory three a, b, c, d: Alternative sanctions. The sanctions mounted on the basis of differential performance operate through a) 'regulation', b) 'consumer choice', c) 'purchasing decisions', d) 'shaming'.
Theory four: Response. Parties subject to the public notification measures will react to the sanctions in order to maintain position or improve performance.
Theory five: Ratings resistance. The authority of the performance measures can be undermined by the agents of those measured claiming that the data are invalid and unreliable.
Theory six: Rival framing. The 'expert framing' assumed in the performance measure is distorted through the application of the media's 'dominant frames'.
Theory seven: Measure manipulation. Response may be made to the measurement rather than its consequences, with attempts to outmanoeuvre the monitoring apparatus.

25 Step 2: Theory of Change for an Evaluation Office
Results chain:
Outputs: evaluation studies (participation); evaluation reports (findings and conclusions, recommendations); advice
Immediate outcomes: the office has credibility and evidence; enhanced value of evaluative thinking; better informed management; acceptance of recommendations and advice
Intermediate outcomes: implementation of recommendations and advice; better designed programs; better data for evaluations
Final outcomes: more effective programs; informed decision-making; productive operations; cost-effective programs; better benefits to citizens
Assumptions along the chain: changes were not planned anyway; recommendations work
Other influencing factors: managers' and organisation initiatives; better management practices
This is our contribution story line.

26 3. Gather existing evidence
Assess the logical robustness of the ToC
Gather available evidence on:
Results
Assumptions
Other influencing factors

27 4. Assemble and assess the contribution story
Set out the contribution story
Assess its strengths and weaknesses
Refine the ToC

28 Theory of change analysis
Need to identify which of the links in the results chain have the weakest evidence
Some may be supported by prior research
Some may be well accepted
But some may be a large leap of faith, or the subject of debate
With limited resources, these contested links are where effort should be focused
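The triage described above can be sketched as a ranking of links by evidence strength, so that effort goes to the contested "leaps of faith" first. A hypothetical illustration; the links, categories and scores are mine.

```python
# Hypothetical sketch: rank results-chain links weakest-evidence first,
# so limited evaluation resources target the contested links.
# Link names, evidence categories and scores are illustrative.

links = {
    "activities -> outputs":         "prior research",   # well supported
    "outputs -> immediate outcomes": "well accepted",
    "immediate -> intermediate":     "contested",        # leap of faith
    "intermediate -> end outcomes":  "contested",
}

EVIDENCE_STRENGTH = {"prior research": 3, "well accepted": 2, "contested": 1}

def weakest_links(links):
    """Return link names ordered weakest-evidence first."""
    return sorted(links, key=lambda k: EVIDENCE_STRENGTH[links[k]])

priorities = weakest_links(links)
```

With this toy data, the two contested links come out ahead of the well-supported ones, which is where step 5 (seeking additional evidence) would begin.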

29 5. Seek out additional evidence
Determine what is needed
Gather new evidence

30 Strengthening Techniques
Refine the results chain and/or gather additional results data
Survey knowledgeable others involved
Track program variations and their impacts (time, location, strength)
Undertake case studies
Identify relevant research or evaluation
Use multiple lines of evidence
Do a focused mini-evaluation
The idea is that over time, the results chain becomes better populated with data and evidence on the links.

31 The Agricultural Research Organizations evaluation
CA done:
Theory of change developed
Other influencing factors recognized
The theory of change was revised based on lessons learned
CA that could have been done:
A more structured CA approach
More analysis of other factors
More attention to the risks faced

32 6. Revise and strengthen the contribution story
Build the more credible contribution story
Reassess its strengths and weaknesses
Revisit step 5

33 A CA Case Study
Patton (2008). Advocacy Impact Evaluation. JMDE, 5(9): 1-10.
A collaboration of agencies spent over $2M on a campaign to influence a Supreme Court decision
Evaluation issue: Did it work?
Conclusion: the campaign contributed significantly to the Court's decision

34 Features
It was a stealth campaign
The evaluation used Scriven's General Elimination Method (GEM), or the modus operandi approach
It undertook considerable document review and interviews; an in-depth case study served as the evidence for the evaluation
GEM: Using evidence gathered through fieldwork—interviews, document analysis, detailed review of the Court arguments and decision, news analysis, and the documentation of the campaign itself—we aimed to eliminate alternative or rival explanations until the most compelling explanation, supported by the evidence, remained.
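The elimination logic of GEM can be sketched in a few lines: start with all candidate explanations and strike out those with disconfirming evidence against them. A hypothetical illustration; the explanations and evidence entries are simplified stand-ins for the case.

```python
# Hypothetical sketch of General Elimination Method reasoning: an
# explanation survives only if no gathered evidence counts against it.
# Explanations and evidence strings are illustrative, not case data.

explanations = {
    "campaign influenced the decision":    [],  # nothing rules it out
    "justices decided on law alone":       ["evidence of external influence"],
    "other external influences dominated": ["no comparable rival effort found"],
}

def eliminate(explanations):
    """Keep only explanations with no disconfirming evidence."""
    return [e for e, against in explanations.items() if not against]

surviving = eliminate(explanations)
```

In the toy data, only the campaign-influence explanation survives elimination, mirroring the case's "most compelling explanation remained" logic.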

35 Cause-effect: Attribution vs contribution
Attribution concepts don't work well in complex settings
Contribution analysis identifies likely influences
The case examined two alternative possible influences
Where attribution requires making a cause-effect determination, contribution analysis focuses on identifying likely influences. Contribution analysis, like detective work, requires connecting the dots between what was done and what resulted, examining a multitude of interacting variables and factors, and considering alternative explanations and hypotheses, so that in the end we can reach an independent, reasonable, and evidence-based judgement on the cumulative evidence.
From a contribution perspective, the question became how much influence the campaign appeared to have had, rather than whether the campaign directly produced the observed results.
The other possible influences were (1) that the justices made their decision entirely based on law and their prior dispositions rather than being influenced by external influences, and (2) that external influences other than the campaign had more impact. The preponderance of evidence supports neither of these alternative conclusions.

36 Levels of contribution analysis
Minimalist contribution analysis
Contribution analysis of direct influence
Contribution analysis of indirect influence
I suggest that there are three levels of CA that can be done:
Minimalist: develop the ToC; confirm the expected outputs were delivered
Direct influence: minimalist, plus the expected direct results occurred (immediate outcomes), and there is evidence that the program was influential
Indirect influence: direct, plus evidence that intermediate and end outcomes occurred and the ToC played out
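Because each level builds on the one below, the three levels can be sketched as cumulative checklists. A hypothetical illustration; the requirement strings paraphrase the slide.

```python
# Hypothetical sketch: the three levels of CA as cumulative checklists,
# each level including everything the level below requires.

MINIMALIST = ["theory of change developed",
              "expected outputs delivered"]
DIRECT = MINIMALIST + ["immediate outcomes occurred",
                       "evidence the program was influential",
                       "other influencing factors accounted for"]
INDIRECT = DIRECT + ["intermediate and end outcomes occurred",
                     "theory of change played out"]

def level_achieved(evidence):
    """Return the highest CA level whose every requirement is evidenced."""
    for name, reqs in [("indirect", INDIRECT), ("direct", DIRECT),
                       ("minimalist", MINIMALIST)]:
        if all(r in evidence for r in reqs):
            return name
    return "none"
```

For example, evidence covering only the ToC and delivered outputs yields a minimalist CA; adding confirmed immediate outcomes, influence and other-factor analysis upgrades it to direct influence.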

37 Minimalist CA
Develop the theory of change
Confirm that the expected outputs were delivered
Then, based on the strength of the theory of change, conclude the program made a contribution
But what about other influencing factors? One can always do the other-factor analysis.
What to call this? Weak CA? Minimalist CA? Although weak, it is much better than what one often sees, namely, nothing!

38 Other influencing factors
Literature and knowledgeable others can identify the possible other factors
Reflecting on the theory of change may provide some insight on their plausibility
Prior evaluation/research may provide insight
Relative size compared to the program intervention can be examined
Knowledgeable others will have views on the relative importance of other factors
And at any stage, one can examine and assess the role played by other influencing factors.

39 CA of direct influence
Minimalist CA, plus:
Verifying the expected direct outcomes occurred
Confirming the assumptions associated with the direct outcomes
Accounting for other influencing factors

40 CA of indirect influence
CA of direct influence, plus:
Verifying the intermediate and final outcomes occurred
Confirming the assumptions associated with these indirect outcomes
Accounting for other influencing factors
One would do as much as one could.

41 A credible contribution statement
Description of program context and other influencing factors
A plausible theory of change
Confirmed program activities, outputs and outcomes
CA findings: evidence supporting the ToC and assessment of other influencing factors
Discussion of the quality of evidence

42 When is CA useful?
The program is not experimental
Funding is based on a theory of change
The program has been in place for some time
There is no real scope for varying the intervention(s)

43 Contribution analysis
Builds evidence on:
Immediate and intermediate outcomes, the behavioural changes
Links in the results chain
Other influencing factors at play
Other explanations for observed outcomes
If you can do a well-designed evaluation, go for it; RCTs are even better. Otherwise, CA can provide very useful information.
Contribution analysis aims at confirming or revising a theory of change, not primarily at discovering one. It is not a panacea for addressing attribution, but it can provide useful information on the contribution a program is making. Neither is it a cop-out allowing a limited belief-statement of a theory of change to cover the attribution issue: it requires careful thought, in-depth analysis, evidence gathering, testing and re-testing.
A good CA is a theory-based evaluation, perhaps a 'contribution evaluation'. That might be a useful concept to develop, since so many evaluations don't address attribution at all.

