Evaluating Better Care Together

1 Evaluating Better Care Together
Dr Tom Grimwood
Health and Social Care Evaluations (HASCE), University of Cumbria

2 The Evaluation Project
- Aims, objectives and outcomes
- The methods we use
- Feedback loops
- What we have been finding

3 Aims, objectives and outcomes
Working alongside quantitative metrics collected by UHMB Business Intelligence, HASCE are evaluating the qualitative aspects of BCT. We are:
- Conducting semi-structured interviews and focus groups
- Surveying populations
- Working on contextual analyses
- Examining different forms of resource use
- Facilitating workshops on evaluation and its role in BCT
These strands provide formative and summative evaluation of the New Care Model, and ensure that evaluation remains central to the programme.

4 The questions we are answering
- What is the context (e.g. history, culture, economic and social circumstances, relationships, health inequalities, local and national policies, national legislation) in each vanguard into which new care models have been implemented?
- What key changes have the vanguards made, and who is being affected by them?
- How have these changes been implemented?
- Which components of the care model are really making a difference?
- What is the change in resource use and cost for the specific interventions that make up the new care models programme locally?
- How are vanguards performing against their expectations, and how can the care model be improved?
- What are the unintended costs and consequences (positive or negative) of the new models of care on the local health economy and beyond?

5 The questions we are answering (cont.)
- What expected or unexpected impact is the vanguard having on patient outcomes and experience, the health of the local population, and the way in which resources are used in the local health system (e.g. equality)?
- What is causing the outcomes demonstrated in particular elements of the programme, systems, patients or staff?
- What are the ‘active ingredients’ of a care model? Which aspects, if replicated elsewhere, can be expected to give similar results, and what contextual factors are prerequisites for success?

6 The methods we use
The aim of the evaluation is not to tell us simply ‘what works’. Rather, it should tell us:
“What works in which circumstances, and for whom?”
Or:
“What works, for whom, in what respects, to what extent, in what contexts, and how?”
Frontline evaluation is interlinked with workshops and an on-line webfolio, both to ensure its robustness and to develop a longer-term, sustainable evaluation culture within the Morecambe Bay footprint.

7 Data collection
- Contextual – desk-based, using the VICTORE model (December 2016, updated regularly): Volitions, Implementation, Contexts, Time, Outcomes, Rivalries, Emergence.
- Process – semi-structured interviews and focus groups (ongoing)
  - ‘Snowball’ process: following particular projects through from strategic to ground level (where possible)
  - ‘Joining up’ different small-scale activities to identify patterns and themes
- Outcomes – large-scale survey (June 2017)
- Resources – synthesis of qualitative and existing data (August 2017)

8 Realist evaluation
We are interested in evaluating BCT in terms of its contexts, mechanisms (enabling or disabling), and outcomes. These enable us to theorise the programme:
- An intervention works (or not) because actors make particular decisions in response to it (or not).
- If the right processes operate in the right conditions, then the right outcome appears.
- E.g. “In this context, that particular mechanism engaged these actors, generating those outcomes. In that context, this other mechanism happened, generating these different outcomes.”
This allows for a ‘ground-up’ evaluation of multi-faceted change programmes, alert to local needs. We identify theories from qualitative analysis, and then test them via larger-scale surveys (see the sketch below).
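To make the context–mechanism–outcome (CMO) pattern concrete, here is a minimal, purely illustrative Python sketch of how one such configuration might be recorded for later testing against survey data. The class and field names (CMOConfiguration, enabling, evidence) are invented for this example and are not part of the BCT evaluation toolkit.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class CMOConfiguration:
    """One realist theory: in this context, this mechanism fires, producing this outcome."""
    context: str            # e.g. demographics, funding, levels of engagement
    mechanism: str          # the decision or response the intervention triggers
    outcome: str            # what should be observed if the theory holds
    enabling: bool = True   # mechanisms can enable or disable change
    evidence: List[str] = field(default_factory=list)  # interview/survey references

# A hypothetical theory drawn from qualitative analysis, awaiting survey testing:
theory = CMOConfiguration(
    context="ICC facing recruitment and attrition pressures",
    mechanism="anxiety about resources shapes staff responses to the new care model",
    outcome="resistance/cynicism to change",
    enabling=False,
)
theory.evidence.append("focus group, example site, 2017")

# Grouping theories by context makes it easy to compare rival mechanisms
# operating in the same circumstances ("what works, for whom, in what contexts").
theories = [theory]
disabling = [t for t in theories if not t.enabling]
print(f"{len(disabling)} disabling mechanism(s) logged")
```

The structure is only a recording device: collecting configurations by context is one way to line up rival mechanisms operating in the same circumstances, which the survey stage then tests.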

9 Creating hypotheses The aim is not to replicate a ‘scientific experiment’ But to allow for the dynamics of transformative change to be captured, and to inform its development. Realist evaluation is an ongoing and dynamic ‘effectiveness cycle’ (Kazi 2003: 30)

10 [Image-only slide]

11 [Image-only slide]

12 Feedback loops
Due to the nature of BCT and its implementation, the evaluation must be iterative. As such, the evaluation design contains several feedback loops:
- Within the research team
- Between researchers and participants/stakeholders:
  - Via informal and scoping discussion
  - Via discussion with the BCT Research and Evaluation Group
  - Via workshops
  - Via the on-line webfolio

13 [Diagram: the programme evaluation cycle, linking the following stages]
- Mixed methods data collection
- Data Analysis and Emergent Findings
- Formative Workshops
- Findings inform further data collection
- Synthesis of Data into Overall Themes
- Programme-Wide Analysis
- Synthesis of Programme Evaluation with National Metrics
- Engagement Workshops
- Programme-Wide Response to Evaluation Questions

14 What we have been finding
What key changes have the vanguards made and who is being affected by them?
- As might be expected, each change tends to have both enabling and disabling aspects.
- Positive views about multi-disciplinary partnerships; wariness of increasing workloads in some areas.
- Clarity of roadmap and direction of travel, but less clarity on implementation and end-point.
- Some of the bigger changes involve smaller mechanisms, e.g. improved communications at ground level. How can the effects of these be captured?

15 What we have been finding (cont.)
What is causing the outcomes demonstrated in particular elements of the programme, systems, patients or staff?
- Much discussion from participants about what the right kinds of outcomes are. For example, cultural change (e.g. a change of focus to health and wellbeing) may not be reflected in targets. How can these successes be recognised, identified, and validated?
- Importance of the cultural perception of place: past interventions, funding distribution, funding longevity, etc. How much causal effect is this having?
- Importance of leadership at every level.
- Importance of personalities (harder to replicate!).

16 What we have been finding (cont.)
What are the ‘active ingredients’ of a care model?
- There is a tension between the time that change (and the measures it requires) takes to happen and more immediate pressures (e.g. financial). How is this balanced?
- One active ingredient is anxiety, including localised concerns around resources, recruitment and attrition, etc.
- Another very active ingredient is the role of technology, and how new ways of working are implemented.

17 Contexts, Mechanisms and Outcomes

Contexts:
- Demographics across ICCs
- Roles and levels of engagement with BCT
- Recruitment and Attrition
- Enthusiasm for BCT
- Resistance/cynicism to change
- Perceived distance from decision-making
- Time Available
- Funding

Mechanisms:
- Intervention logic
- Integrating care
- Patient-centred approaches
- Working with local populations
- Relationships (inter- and intra-organisational), including information sharing
- Leadership
- Use of technology
- Organisational architecture
- Communication

Outcomes:
- Quantitative metrics (e.g. reductions in outpatient appts)
- Accountability
- Measures of change
- Types of ‘success’
- Shared understanding of roles, responsibilities and goals
- Positive feedback from staff/patients
- Sustainability
- Shifting care to the community/increased pressure outside of acute care

