Evaluation of complex policy and care: more than methods
Nicholas Mays, Professor of Health Policy, Department of Health Services Research & Policy
Nuffield Trust conference, ‘Evaluation of complex care 2015’, 22 June 2015

My argument
– Tendency to over-emphasise study design and methods, and to promote more ‘robust’ approaches (e.g. RCTs)
– Not enough attention to the policy/decision system within which the evaluations are to be used; for example, today’s programme is described as being focused on ‘the practical applications of evaluation’, yet it is mostly about techniques for doing evaluations
– Advocacy tends to neglect: the purpose of evaluation; the stakeholders in the policy/programme; the audience; and the feasibility of using the findings (e.g. how much ‘decision space’ is available?)

Plethora of advocacy and advice on evidence-based policy and evaluation

In particular, advocacy of more learning from policy experiments in the shape of RCTs
“Randomised trials are our best way to find out if something works: by randomly assigning participants to one intervention or another, and measuring the outcome we’re interested in, we exclude all alternative explanations for any difference between the two groups. If you don’t know which of two reasonable interventions is best [sic], and you want to find out, a trial will tell you.”
Ben Goldacre, The Guardian, 14 May 2011
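The quote sets out the core logic of a trial: random assignment to one of two arms, then a comparison of the outcome of interest. As a minimal, purely illustrative sketch of that logic (not part of the talk; NumPy/SciPy, with made-up sample size and effect):

```python
# Illustrative only: randomly assign participants to two arms and compare
# a single outcome. All numbers are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 200  # hypothetical number of participants

# Random assignment: half of the participants to each arm.
arm = rng.permutation(np.repeat(["intervention", "control"], n // 2))

# Hypothetical outcome: control mean 50, intervention adds ~2 points on average.
outcome = np.where(arm == "intervention",
                   rng.normal(52, 10, n),
                   rng.normal(50, 10, n))

treated = outcome[arm == "intervention"]
controls = outcome[arm == "control"]
t_stat, p_value = stats.ttest_ind(treated, controls)
print(f"difference in means: {treated.mean() - controls.mean():.2f} (p = {p_value:.3f})")
```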

Accompanied by an epidemic of pilots, trailblazers, demonstrators, pioneers, vanguards …
Vanguards: Integrated Primary and Acute Care Systems – joining up GP, hospital, community and mental health services

Some evaluation guides begin to look at wider issues
– Why do an evaluation?
– What are the different types of evaluation?
– What are the design considerations for an evaluation?
– What are we comparing our intervention with?
– How does evaluation differ from other forms of measurement?
– What practical issues should we consider?
– When should we start and finish an evaluation?
– How do we cope with changes in the intervention when the evaluation is underway?
– Should we do the evaluation ourselves or commission an external team?
– How do we communicate evaluation findings?

Ten points to consider when initiating ‘pilots’ and planning their evaluation
From: Ettelt S, Mays N. (2015) Advice on commissioning external academic evaluations of policy pilots in health and social care. London: Policy Innovation Research Unit, LSHTM, forthcoming

The ten points
1. Clarify the purpose of the programme/pilot
2. Identify the primary audience
3. Relate the evaluation design to the purpose of the programme/pilot
4. Identify how the findings could be used
5. Anticipate that the setting up of the programme/pilot will take longer than expected
6. Tease out the programme/pilot ‘intervention logic’
7. Obtain and maintain commitment from pilot sites
8. If you consider an RCT, think about the implications (including for points 1–5 and 7 above)
9. Consider the implications of different types of evaluator and evaluation stance
10. Anticipate that one evaluation is unlikely to produce definitive answers

1. Clarify the purpose of the pilot
– Often seen as self-evident, but important to be clear
– Usually relates to how fully developed the pilot/programme is seen to be
– Multiple purposes for ‘piloting’:
  1. Testing policy effectiveness (‘does it work?’)
  2. Promoting implementation (e.g. trailblazers, demonstrators)
  3. Identifying policy innovations (e.g. pioneers)
– These can conflict, and the differences are often unacknowledged – different participants can assume different purposes, adding to complexity
– The purposes have major implications for the design of the pilot
– Probably the most important distinction is between 1 and the rest, since this affects the comparison: 2 and 3 are likely to focus on comparing different forms of intervention (2) or different approaches to the problem (3), whereas 1 would tend to compare the (new) intervention with usual practice/the status quo (a ‘control’)

4. Identify how the findings could be used
– In what circumstances could the findings be of value?
– Consider the scope of action possible for the main audience if the findings are favourable and if they are unfavourable to the policy – this means defining in advance what ‘success’ or ‘favourable’ would look like
– If the findings are not favourable, what room for manoeuvre might key decision makers have?

5. Anticipate that setting up pilots will take longer than expected
– Setting up pilots locally can take a lot longer than expected: the degree of novelty and change required is often underestimated. This is particularly important for outcome evaluation
– It is important to understand the causes of delays, e.g. a policy that is intrinsically or contextually unsuitable versus a lack of skills among implementers
– Working out the ingredients of a programme (i.e. describing the activities needed in individual sites for implementation) often takes time but is likely to pay off in the long term

7. Obtain and maintain commitment from pilot sites
– There are strong incentives for sites to volunteer (kudos, interest in promoting change locally, additional funding if available), but these are often not matched by the level of commitment needed throughout the duration of the programme and its evaluation
– The balance of central input and local scope for trial and error needs to be considered: if outcome evaluation is the aim, the scope for local trial and error in implementation is smaller, i.e. more central input is needed (including researcher control over data collection)

8. If you consider an RCT, think about the implications
– Requirements of robust outcome evaluation:
  – Clarity about the intervention mechanism(s)
  – A degree of policy/programme stability or consistency across sites
  – Sufficient scale to achieve statistical significance (see the sketch after this slide)
– Additional requirements for RCTs:
  – Ability to maintain genuine uncertainty (equipoise)
  – Ability of researchers to control recruitment
– Implementing a policy as an RCT can reduce its odds of success: it may reduce implementers’ confidence in the policy and thereby the vigour with which it is implemented
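‘Scale to achieve statistical significance’ can be made concrete with a standard power calculation. The sketch below is an illustration under assumed conditions (a two-arm comparison of means, 80% power, 5% significance level, using statsmodels), not part of the original slides:

```python
# Illustrative only: participants needed per arm to detect a standardised
# effect (Cohen's d) in a two-arm trial with 80% power at the 5% level.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for effect_size in (0.2, 0.5, 0.8):  # conventional small, medium, large effects
    n_per_arm = analysis.solve_power(effect_size=effect_size, alpha=0.05, power=0.8)
    print(f"d = {effect_size}: roughly {n_per_arm:.0f} participants per arm")
```

For a small standardised effect (d = 0.2), this gives roughly 400 participants per arm, which is one reason scale matters for pilots intended to test effectiveness.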

10. Anticipate that one evaluation is unlikely to produce definitive answers
– Scientific reasons: findings will be context dependent; better knowledge tends to generate more questions
– Policy reasons: the complexity of many policy innovations and the range of implementation settings are unlikely to be addressed in a single study, no matter how large or comprehensive
– Political reasons: conflicts over policy goals and the underlying values of the policy will persist irrespective of evidence, and the debate will come to include the qualities of the study itself

But this is not a counsel of despair
Evaluation of policy can still provide substantial insight and illumination to guide future decisions

Sources
– Ettelt S, Mays N, Allen P. (2015) The multiple purposes of policy piloting and their consequences: Three examples from national health and social care policy in England. Journal of Social Policy 44(2).
– Ettelt S, Mays N, Allen P. (2015) Policy experiments – investigating effectiveness or confirming direction? Evaluation (in press).
– Ettelt S, Mays N. (2015) RCTs – how compatible are they with contemporary health policy-making? British Journal of Healthcare Management (in press).
– HM Treasury (2011) The Magenta Book: guidance for evaluation. London: HM Treasury.
– MRC (2008) Developing and evaluating complex interventions: new guidance. London: Medical Research Council.
– MRC (2015) Process evaluation of complex interventions. London: Medical Research Council.