Moving from Safety-I to Safety-II


Moving from Safety-I to Safety-II
Symposium on Human Factors & Ergonomics in Health Care, 12 March 2013
Robert L Wears, MD, MS, PhD
University of Florida / Imperial College London

motivation
general agreement that we are not making progress on safety as fast as we would like
the usual response: we have not been 'Protestant enough', calling for more rigour (eg, EBM) and greater accountability
two possibilities: not enough rigour (EBM), or not working hard or conscientiously enough (accountability)
interesting to think about what's NOT being said

motivation
general agreement that we are not making progress on safety as fast as we would like
what's NOT being said: we may have the wrong mental model of safety
"… enduring Enlightenment projects … rationality can create a better, more controllable world … taken for granted by safety researchers because it appears so ordinary, self-evident and commonsensical." (Dekker 2012)
note this reprises the fundamental attribution fallacy: we attribute failings to individual attributes (not enough rigour, not working hard or conscientiously enough), not situational ones

“A scientific paradigm suppresses the perception of data inconsistent with the paradigm, making it hard to perceive anomalies that might lead to scientific revolution.”

effect of mental models
the 11-year lag in discovery of the Antarctic ozone hole (Meadows, Meadows, Randers 1992)

patient safety orthodoxy
technocratic, instrumental, 'measure-and-manage' approach:
- myopic: failing to question the underlying nature of problems
- overly simplistic: transferring solutions from other sectors
- negligent of knock-on effects of change
"glosses over the complexities of health care organisation and delivery"

view from safety-I
- accidents come from erratic acts by people (variability, mistakes, errors, violations)
- study and count accidents to understand safety (tends to look backwards)
- focus on components
- safety is acquired by constraining workers via standardisation, guidelines, procedures, rules, interlocks, checklists, barriers
lineage: Taylor, Deming, Shewhart, Toyota: sacred texts, and sacred myths

assumptions in safety-I (mostly unarticulated)
- our systems are well-designed and well-understood
- procedures are correct and complete
- systems are basically safe, well-protected
- reliability = predictable, invariant; variation is the enemy
- safety is an attribute (something a system has): an adjective
- conditions are well-anticipated, well-specified: an anticipatory model

view from safety-II
- accidents are prevented by people adapting to conditions
- study normal work to understand safety (tends to look forward)
- focus on inter-relations
- aim is to manage, not eliminate, the unexpected
- safety is enacted by enabling workers: making hazards, constraints, and goal conflicts visible; enhancing the repertoire of responses
lineage: Rasmussen, Woods, Hollnagel; formative cases: Three Mile Island, Tenerife; a good example is NASA's 'scheduled hold'

assumptions in safety-II
- our designs are incomplete, procedures out-dated
- our systems are poorly understood
- systems are basically unsafe
- reliability = responsiveness; variation is necessary
- safety is an activity (something a system does): a verb
- possible failure modes have not been anticipated: a 'continuing expectation of surprise'

safety-II view of the healthcare sociotechnical system (STS)
- demands are intractable, underspecified, variable
- resources (time, people, material, information) are limited and uncertain
- workers adjust to meet conditions, creating variability
- adjustments are always approximate (because resources are limited)
- approximate adjustments usually reach goals and make things go safely
- approximate adjustments sometimes fail, or make things go wrong
"Knowledge and error flow from the same mental source; only success can tell one from another." Ernst Mach, 1905
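The last two points are the crux: success and failure flow from the same everyday variability. A minimal Monte Carlo sketch of that claim (my illustration, not from the talk; all names and parameters are invented):

```python
# Toy sketch (illustrative only): every adjustment is approximate, drawn from
# one distribution of everyday performance variability. The same source of
# variation produces both the successes and the rare failures.
import random

random.seed(1)

def shift_outcome(intended_fit: float) -> str:
    """One approximate adjustment: actual fit to conditions varies around the intent."""
    actual = intended_fit + random.gauss(0, 0.15)  # everyday variability (assumed)
    return "goes well" if actual > 0.5 else "goes wrong"

outcomes = [shift_outcome(0.8) for _ in range(10_000)]
print(outcomes.count("goes well") / len(outcomes))   # ~0.98: most work goes well
print(outcomes.count("goes wrong") / len(outcomes))  # ~0.02: failure, same source
```

Under these invented numbers roughly 98% of adjustments succeed and 2% fail; eliminating the variability would also eliminate the usual source of success.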

safety-I vs safety-II summary

safety-I:
- defined by its opposite: failure
- systems well designed & maintained, procedures correct & complete
- people (ought to) behave as expected & trained
- accidents come from variability in the above
- therefore safety comes from limiting & constraining operators via standardization, procedures, rules, interlocks, barriers
- critical inquiry

safety-II:
- defined by its goal: success
- systems poorly understood, incomplete, underspecified
- people (ought to) adjust behaviour & interpret procedures
- accidents come from incomplete adaptation
- therefore safety comes from supporting operators via making boundaries, hazards, and goal conflicts visible, and enhancing the repertoire of responses
- appreciative inquiry

not either-or, rather both-and

philosophical bases

safety-I:
- linear, proportional, tractable
- behaviour explained by reduction
- positivist, Taylorist: the 'program of technical rationality'
- cause-effect simple, one-way
- controllable: 'the one best way'
- values declarative, technical knowledge
- complicated problems
- techne, episteme

safety-II:
- non-linear, non-proportional, intractable
- behaviour explained by emergence
- constructivist, interpretivist
- cause-effect multiple, reciprocal
- influence-able: equifinality, multifinality
- values practice, tacit wisdom
- 'wicked problems'
- mētis, phronesis

why safety-II?
"The real trouble with this world of ours is not that it is an unreasonable world, nor even that it is a reasonable one. The commonest kind of trouble is that it is nearly reasonable, but not quite. Life is not an illogicality; yet it is a trap for logicians. It looks just a little more mathematical and regular than it is; its exactitude is obvious, but its inexactitude is hidden; its wildness lies in wait."
G K Chesterton, 1909

why safety-II? better fit with modern theories of accidents
[timeline figure, 1940-2000: accident models move from simple, linear chains of events, through complicated and interdependent, to complex and nonlinear, with coupling, resonance, and emergence]

why safety-II?
- resilience, 'margin for maneuver', buffers, and tradeoffs are all "hidden in the interstices of complex work"
- a focus on how ordinary work goes right is less likely to inadvertently damage these hidden resources

empirical support
direct observations of cardiovascular surgery: surgeons with the best results had just as many untoward events as those with the worst, but they had better means of detection and a greater repertoire of responses (de Leval, Carthey, et al 2000)
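A hedged toy model of that finding (my construction, not the study's analysis; the event and rescue rates are invented): with an identical rate of untoward events, outcomes differ only through detection and recovery.

```python
# Toy model (illustrative only): two performers share the SAME event rate and
# differ only in the probability of detecting and recovering from an event.
import random

random.seed(2)

def mortality(n_cases: int, event_rate: float, rescue_prob: float) -> float:
    """A death occurs only when an untoward event is neither detected nor recovered."""
    deaths = sum(
        1
        for _ in range(n_cases)
        if random.random() < event_rate and random.random() > rescue_prob
    )
    return deaths / n_cases

# identical event rate, different capacity to detect and respond (assumed numbers)
print(mortality(100_000, event_rate=0.10, rescue_prob=0.90))  # 'best':  ~0.01
print(mortality(100_000, event_rate=0.10, rescue_prob=0.50))  # 'worst': ~0.05
```

The 'best' performer's advantage here is entirely a larger capacity to detect and respond, not a lower rate of events.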

fundamental ideas not new
Ernst Mach (1903)
Charles Perrow (1984)
Jens Rasmussen (1990, 1997)
Gary Klein (1989ff)
Gene Rochlin (1987, 1999)
Paul Schulman (1993, 2004)
Amalberti (2001)
Hollnagel et al (2006ff)
Berwick (2003)

from 'St Donald'
in fairness, other remarks in the piece suggest Berwick hasn't fully adopted this position, or at least shrinks a bit from its full implications. But I think he would agree with me in saying that healthcare has overlearnt a particular epistemology to the exclusion of other, more useful understandings. (Berwick 2003)

what makes safety-I persist?
not despite the fact that it's wrong, but precisely because it is wrong, wrong in particularly useful ways:
- simple explanations ('the nurse failed to notice …')
- illusion of control, ontological security
- removes managers and organisations from the line of fire
- fits the positivist, biomedical model and the Enlightenment 'program of technical rationality'
- failure comes from aberrant people / devices, so remove or control them
- refitting and reorganising are expensive, so re-train instead

why HFE is a good fit for safety-II
- multiple philosophies of science admissible
- expertise in unpacking the mundane
- judicious valuing of practice
- 'requisite variety' of views and tools
- work-as-imagined vs work-as-done

perceive the invisible
Inspector Gregory: Is there any point to which you would wish to draw my attention?
Sherlock Holmes: To the curious incident of the dog in the night-time.
Inspector Gregory: The dog did nothing in the night-time.
Sherlock Holmes: That was the curious incident …
Conan Doyle, 1893

what is needed to move forward?
- requisite variety: mental models, theories, skills, people
- critical mass
- sustained co-presence

ACEP 2007
a complex, adaptive, joint cognitive system in which information and work are shared across people and physical and social artefacts (tools, procedures). It may not be as integrated as Hutchins' joint cognitive system of ship's bridge navigation, where there is no single 'spot' at which the calculation occurs, but it is still very similar.

contact information
Robert L Wears, MD, MS, PhD
wears@ufl.edu
r.wears@imperial.ac.uk
+1 904 244 4405

empirical support
NSQIP study: hospitals with the lowest mortality had just as many complications as those with the worst, but they had earlier recognition and better responses (Ghaferi 2009)

“Any argument for the safety of a design that relies solely on pointing to what has worked successfully in the past is logically flawed.” John Roebling

sorting out the two views
- resilience vs orthodox approach
- exploration vs exploitation
- prescriptive vs adaptive guidance
- homogeneous vs heterogeneous processes
- centralized vs distributed control
- organic, evolutionary vs engineered, managerial

when HF and healthcare meet