
1 Evaluation of complex policy and care: more than methods
Nicholas Mays, Professor of Health Policy, Department of Health Services Research & Policy, LSHTM
Nuffield Trust conference, 'Evaluation of complex care 2015', 22 June 2015
Improving health worldwide (www.lshtm.ac.uk)

2 My argument
There is a tendency to over-emphasise study design and methods, and to promote more 'robust' approaches (e.g. RCTs)
Not enough attention is paid to the policy/decision system within which the evaluations are to be used
For example, today's programme is described as being focused on 'the practical applications of evaluation', yet it is mostly about techniques for doing evaluations
Advocacy tends to neglect: the purpose of evaluation; the stakeholders in the policy/programme; the audience; and the feasibility of using the findings (e.g. how much 'decision space' is available?)

3 Plethora of advocacy and advice on evidence-based policy and evaluation

4 In particular, advocacy of more learning from policy experiments in the shape of RCTs
"Randomised trials are our best way to find out if something works: by randomly assigning participants to one intervention or another, and measuring the outcome we're interested in, we exclude all alternative explanations for any difference between the two groups. If you don't know which of two reasonable interventions is best [sic], and you want to find out, a trial will tell you."
Ben Goldacre, The Guardian, 14 May 2011
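[Editor's illustration, not part of the original slides.] To make the logic of the quote concrete, here is a minimal sketch of a two-arm trial in Python: because participants are randomly assigned, an unobserved baseline confounder is balanced across arms in expectation, so the difference in mean outcomes estimates the intervention effect. All names and numbers are illustrative assumptions.

```python
import random
import statistics

random.seed(42)

# Hypothetical setup: each participant has an unobserved baseline risk;
# the intervention adds a fixed benefit of 0.5 to the outcome.
TRUE_EFFECT = 0.5

def simulate_trial(n_participants: int) -> float:
    """Randomly assign participants to control/intervention and return
    the difference in mean outcomes (the effect estimate)."""
    control, intervention = [], []
    for _ in range(n_participants):
        baseline = random.gauss(0, 1)      # unobserved confounder
        if random.random() < 0.5:          # the randomisation step
            intervention.append(baseline + TRUE_EFFECT + random.gauss(0, 1))
        else:
            control.append(baseline + random.gauss(0, 1))
    return statistics.mean(intervention) - statistics.mean(control)

# With enough participants the estimate converges on the true effect,
# because randomisation balances baseline risk across the two arms.
print(f"Estimated effect: {simulate_trial(10_000):.3f}")  # close to 0.5
```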

5 Accompanied by an epidemic of pilots, trailblazers, demonstrators, pioneers, vanguards …
Example – Vanguards: Integrated Primary and Acute Care Systems, joining up GP, hospital, community and mental health services

6 Some evaluation guides begin to look at wider issues
Why do an evaluation?
What are the different types of evaluation?
What are the design considerations for an evaluation?
What are we comparing our intervention with?
How does evaluation differ from other forms of measurement?
What practical issues should we consider?
When should we start and finish an evaluation?
How do we cope with changes in the intervention when the evaluation is underway?
Should we do the evaluation ourselves or commission an external team?
How do we communicate evaluation findings?
Source: http://www.health.org.uk/publication/evaluation-what-consider

7 Ten points to consider when initiating ‘pilots’ and planning their evaluation From: Ettelt S, Mays N. (2015) Advice on commissioning external academic evaluations of policy pilots in health and social care. London: Policy Innovation Research Unit, LSHTM, forthcoming

8 The ten points
1. Clarify the purpose of the programme/pilot
2. Identify the primary audience
3. Relate the evaluation design to the purpose of the programme/pilot
4. Identify how the findings could be used
5. Anticipate that the setting up of the programme/pilot will take longer than expected
6. Tease out the programme/pilot 'intervention logic'
7. Obtain and maintain commitment from pilot sites
8. If you consider an RCT, think about the implications (including for points 1-5 and 7 above)
9. Consider the implications of different types of evaluator and evaluation stance
10. Anticipate that one evaluation is unlikely to produce definitive answers

9 1. Clarify the purpose of the pilot
Often seen as self-evident, but important to be clear
Usually relates to how fully developed the pilot/programme is seen to be
Multiple purposes for 'piloting':
1. Testing policy effectiveness ('does it work?')
2. Promoting implementation (e.g. trailblazers; demonstrators)
3. Identifying policy innovations (e.g. pioneers)
These purposes can conflict, and the differences are often unacknowledged – different participants can assume different purposes, adding to complexity
The purposes have major implications for the design of the pilot
Probably the most important distinction is between 1 and the rest, since it affects the comparison – 2 and 3 are likely to compare different forms of intervention (2) or approaches to the problem (3), whereas 1 would tend to compare the (new) intervention with usual practice/the status quo (a 'control')

10 4. Identify how the findings could be used
In what circumstances could the findings be of value?
Consider the scope of action possible for the main audience if the findings are favourable and if they are unfavourable to the policy – need to define in advance what 'success' or 'favourable' would look like
If the findings are not favourable, what room for manoeuvre might key decision makers have?

11 5. Anticipate that setting up pilots will take longer than expected
Setting up pilots locally can take a lot longer than expected – the degree of novelty and change required is often underestimated
– This is particularly important for outcome evaluation
It is important to understand the causes of delays
– Policy that is intrinsically or contextually unsuitable versus a lack of skills among implementers
Working out the ingredients of a programme (i.e. by describing the activities needed in individual sites for implementation) often takes time, but is likely to pay off in the long term

12 7. Obtain and maintain commitment from pilot sites
There are strong incentives for sites to volunteer (kudos, interest in promoting change locally, additional funding if available) – but these are often not matched by the level of commitment needed throughout the duration of the programme and its evaluation
The balance between central input and local scope for trial and error needs to be considered
– If outcome evaluation is the aim, the scope for local trial and error in implementation is smaller, i.e. more central input is needed (including researcher control over data collection)

13 8. If you consider an RCT, think about the implications
Requirements of robust outcome evaluation:
– Clarity about the intervention mechanism(s)
– A degree of policy/programme stability or consistency across sites
– Sufficient scale to achieve statistical significance (see the sketch below)
Additional requirements for RCTs:
– Ability to maintain genuine uncertainty (equipoise)
– Ability of researchers to control recruitment
Implementing a policy as an RCT can reduce its odds of success – it may reduce implementers' confidence in the policy and thereby the vigour with which it is implemented
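[Editor's illustration, not part of the original slides.] As a rough guide to the 'scale' requirement above, the standard normal-approximation formula gives the sample size per arm needed to detect a standardised effect size d at two-sided significance level alpha with a given power; the effect sizes used below are illustrative assumptions.

```python
import math
from scipy.stats import norm

def n_per_arm(d: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Participants needed per arm to detect a standardised effect size d
    (Cohen's d) in a two-arm trial, two-sided test, normal approximation:
        n = 2 * ((z_{1 - alpha/2} + z_{power}) / d) ** 2
    """
    z_alpha = norm.ppf(1 - alpha / 2)   # 1.96 for alpha = 0.05
    z_power = norm.ppf(power)           # 0.84 for 80% power
    return math.ceil(2 * ((z_alpha + z_power) / d) ** 2)

# Small policy effects demand large pilots: halving the detectable
# effect size roughly quadruples the required sample size.
for d in (0.5, 0.2, 0.1):
    print(f"effect size d = {d}: {n_per_arm(d)} participants per arm")
```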

14 10. Anticipate that one evaluation is unlikely to produce definitive answers
Scientific reasons:
– Findings will be context dependent
– Better knowledge tends to generate more questions
Policy reasons:
– The complexity of many policy innovations and the range of implementation settings are unlikely to be addressed in a single study, no matter how large or comprehensive
Political reasons:
– Conflicts over policy goals and the underlying values of a policy will persist irrespective of the evidence
– The debate will extend to the qualities of the study itself

15 But this is not a counsel of despair
Evaluation of policy can still provide substantial insight and illumination to guide future decisions

16 Sources
Ettelt S, Mays N, Allen P. (2015) The multiple purposes of policy piloting and their consequences: three examples from national health and social care policy in England. Journal of Social Policy 44(2): 319-337.
Ettelt S, Mays N, Allen P. (2015) Policy experiments – investigating effectiveness or confirming direction. Evaluation (in press).
Ettelt S, Mays N. (2015) RCTs – how compatible are they with contemporary health policy-making? British Journal of Healthcare Management (in press).
HM Treasury (2011) The Magenta Book: guidance for evaluation. London: HM Treasury.
MRC (2008) Developing and evaluating complex interventions: new guidance. London: Medical Research Council.
MRC (2015) Process evaluation of complex interventions. London: Medical Research Council.

