1
Preventing Errors: Safety for Patients and Caregivers
Stephen R Czekalinski MBA, RN, BSN
2
Agenda Patient/Staff safety in healthcare Where we started
How far have we come? Next Steps Culture Human factors ergonomics (HFE) Questions and such
3
Common Assumptions in Healthcare
Errors are personal failings Someone must be at fault Healthcare professionals resist change
4
How Safe is Healthcare?
[Figure: comparative risk chart. Y-axis: total lives lost per year (1 to 100,000); X-axis: number of encounters for each fatality (1 to 10,000,000), spanning Dangerous (>1 in 1,000) to Ultra Safe (<1 in 100,000). Health care, at roughly 1 in 616, sits at the dangerous end; other activities plotted include driving in the US, scheduled commercial airlines, chartered flights, European railroads, mountaineering, chemical manufacturing, nuclear power, and bungee jumping.]
"How Many Bungee Jumping Deaths Happen Each Year?" Wikianswers (2009). Web. 8 Oct.
The British government, comparing the risks of various activities, assembled these statistics:
* Maternal death in pregnancy: 1 in 8,200 maternities
* Surgical anesthesia: 1 in 185,000 operations
* Hang-gliding: 1 in 116,000 flights
* Scuba diving: 1 in 200,000 dives
* Rock climbing: 1 in 320,000 climbs
* Canoeing: 1 in 750,000 outings
* Fairground rides: 1 in 834,000,000 rides
* Rail travel accidents: 1 in 43,000,000 passenger journeys
* Aircraft accidents: 1 in 125,000,000 passenger journeys
5
Patient Safety
44,000-98,000 lives lost every year
6
We all make errors! In healthcare: Hospital-Acquired Conditions
Transfusion reaction (wrong blood)
Medication event
Misdiagnosis
Hospital-acquired infection
Treatment error
Delay in treatment
Wrong site/side surgery or procedure
Fall with serious injury
7
Josie King - Died February 22, 2001
8
High reliability organizations (HROs)
“operate under very trying conditions all the time and yet manage to have fewer than their fair share of accidents.” Managing the Unexpected (Weick & Sutcliffe) Time: 30 secs Key Point: Our consultants at HPI are experts in the study of High Reliability Organizations, including naval aviation, the airline industry and nuclear power. We are trying to reduce our safety risk just as they have successfully done. Risk = Probability of an event occurring x the Consequence if that event occurred. We can reduce risk by either reducing the probability of an event happening or limiting its consequence. Oftentimes we don’t have much control over the consequence, e.g. if a plane crashes, people will die; if we give too much potassium, a patient will have a dysrhythmia. We do have control over the probability, or likelihood, of the event occurring. HROs focus on the probability variable in the equation to minimize risk. By decreasing the probability of an accident, HROs recast a high-risk enterprise as merely a high-consequence enterprise. HROs operate so as to make systems ultra-safe.
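The notes above reduce risk to a product of two terms. As a rough illustration (my own sketch, not part of the original deck; every number below is hypothetical), here is that relationship in Python, showing why HROs work on the probability term when the consequence is essentially fixed:

```python
# Illustrative sketch (not from the slide deck): the relationship the notes
# describe, Risk = Probability x Consequence. All numbers are hypothetical,
# chosen only to show why HROs attack the probability term.

def risk(probability_per_year: float, consequence: float) -> float:
    """Expected harm per year = likelihood of the event x harm if it occurs."""
    return probability_per_year * consequence

# A high-consequence event whose consequence we cannot change much
# (e.g., a serious medication event), with a hypothetical severity score of 100.
consequence = 100.0

baseline = risk(probability_per_year=1 / 1_000, consequence=consequence)
hro_like = risk(probability_per_year=1 / 100_000, consequence=consequence)

print(f"Baseline risk: {baseline:.4f} harm units/year")
print(f"HRO-like risk: {hro_like:.4f} harm units/year")
# The consequence term is untouched; driving probability down two orders of
# magnitude is what recasts a high-risk enterprise as merely high-consequence.
```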
9
Optimized Outcomes
[Figure: reliability scale from 10^-1 to 10^-8, climbing through three layers, with a "We are here!" marker at the low end. Process Design: Evidence-Based Best Practice, Focus & Simplify, Tactical Improvements (e.g. Bundles). Reliability Culture: Core Values & Vertical Integration, Hire for Fit, Behavior Expectations for all, Fair, Just and 200% Accountability. Human Factors Integration: Intuitive design, Impossible to do the wrong thing, Obvious to do the right thing.]
Time: 2 min
Key Point: Reliability is measured using the scale on the Y-axis. 10^-1 means one time out of 10 the performance is substandard, or 9 times out of 10 the performance is as desired. 10^-2 means one time out of 100 the performance is substandard (99 out of 100 it is done correctly). 10^-6 means one time out of 1,000,000 the performance is substandard. As you go up the scale, reliability improves with an attendant improvement in safety, quality, etc. Commercial aviation operates at 10^-7 reliability regarding passenger safety (there is a one in 10 million chance of dying in a plane crash on a commercial airliner). Healthcare safety is typically 10^-2 or 10^-3, orders of magnitude less safe than commercial aviation or nuclear power operations. Three means to achieve higher reliability: Process Design, which minimizes variation (e.g. deployment of bundles or specifying best practice); Reliability Culture, strong human performance reliability that minimizes human error (the primary area of focus in the OCHSPS initiative); and Human Factors Integration, designing the system to remove human variability from the activity, i.e. make it virtually impossible to do the wrong thing and easy to do the right thing.
© 2010 Healthcare Performance Improvement, LLC. ALL RIGHTS RESERVED.
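For readers who want the Y-axis made concrete, here is a tiny sketch (my own illustration, not from the deck) that translates the 10^-n reliability levels quoted in the notes into "about one failure per N encounters":

```python
# Minimal sketch (my own illustration, not from the deck) of what the 10^-n
# reliability scale on the slide means in concrete terms.

levels = {
    "10^-1": 1e-1,   # 1 failure in 10 attempts
    "10^-2": 1e-2,   # 1 failure in 100 -- typical healthcare process per the notes
    "10^-3": 1e-3,
    "10^-6": 1e-6,   # 1 failure in 1,000,000
    "10^-7": 1e-7,   # commercial aviation passenger safety per the notes
}

for label, failure_rate in levels.items():
    encounters_per_failure = round(1 / failure_rate)
    print(f"{label}: about 1 substandard outcome per {encounters_per_failure:,} encounters")
```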
10
Significant events or injuries
[Swiss cheese diagram labels: Active errors by individuals result in initiating action(s); Poorly designed processes or active errors within a well-designed process; Significant events or injuries.]
Time: <2 min
Key Point: Swiss Cheese Model: there are processes and systems in place to protect against human error, but not all systems or processes are designed well. It is not just one person making a mistake; we are one team, and at any point in care someone has the option to question the process and/or system. If harm reaches a child, we’ve all made a mistake!
Why Significant Events Happen: The Swiss Cheese Effect, a model that explains how human error results in events of harm, was developed by James Reason, a psychologist studying events in aviation. In most everything we do, there are checks and barriers designed to help catch errors and prevent them from resulting in events. This is called defense-in-depth. The slices of Swiss cheese represent that defense-in-depth. The healthcare system is designed wherever possible with defense-in-depth such that single human errors do not result in harm. Good defenses can include technology (a pharmacy order entry system that warns the pharmacist of potential drug interactions); processes (an operative site verification procedure that requires the OR team to pause before making the incision to confirm the correct site); or other people (a coworker who sees us prepare to care for a patient without washing our hands and stops to remind us). Sometimes, however, those good checks and barriers fail. Those failures are the holes in the Swiss cheese. When all our best defenses fail us, an error that otherwise would have been caught carries through the holes unstopped and results in an event. On average, a Serious Safety Event is the result of 8.3 human errors. In our review of safety events here at Rainbow, harm is never the result of one person’s error. In one child’s case review, we identified 16 opportunities in which we failed to use the appropriate safety net to prevent the harm from reaching the patient. Safety doesn’t just happen. We have to work to make it happen. There are two basic approaches to reducing the event rate: first, reduce the human error rate; second, find and fix the holes in the Swiss cheese. Continuous improvement approaches to system reliability are designed to reduce the event rate by 50% in two years. Rainbow plans to continue finding and fixing the system problems, implement Behavioral Expectations for human error prevention, and measure the results through a reduction in Serious Safety Event Rate and an increase in the number of event-free days. “Safety is a Dynamic Non-Event.”
Slide concept adapted from James Reason, Managing the Risks of Organizational Accidents, 1997
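A rough way to see the "find and fix the holes" arithmetic: if the layers of defense fail independently, the chance an error reaches the patient is the product of the per-layer failure probabilities. The sketch below is my own illustration with made-up numbers, not data from the deck:

```python
# Hedged illustration (my numbers, not the deck's): with independent layers of
# defense, the chance a single error passes every "hole" is the product of the
# per-layer failure probabilities -- and fixing holes multiplies protection.

def p_harm(p_error: float, p_layer_fails: list[float]) -> float:
    """Probability an error occurs AND slips through every defense layer."""
    p_through = p_error
    for p_fail in p_layer_fails:
        p_through *= p_fail
    return p_through

# Hypothetical: a 1% human error rate with three imperfect barriers
# (order-entry alert, independent double check, bedside barcode scan).
before = p_harm(0.01, [0.30, 0.20, 0.10])   # leaky barriers
after = p_harm(0.01, [0.15, 0.10, 0.05])    # same barriers, holes half as large

print(f"Chance harm reaches the patient, before: {before:.6%}")
print(f"Chance harm reaches the patient, after:  {after:.6%}")
# Both strategies in the notes help: lower the 0.01 (fewer human errors)
# or shrink the layer failure rates (find and fix the holes).
```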
11
As Humans, We Work in 3 Modes
Skill-Based Performance “Auto-Pilot Mode” Rule-Based Performance “If-Then Response Mode” Knowledge-Based Performance “Figuring It Out Mode” Time: < 1 min Key Point: Covered in detail in the next three slides. Our interventions will be targeted at these three types of human performance errors
12
Skill-Based Errors
What We’re Doing At The Time: We are doing tasks so routine and familiar that we don’t even have to think about the task while we are doing it.
Type of Error: Slip – act performed wrong; Lapse – act not performed when required
Example Error Prevention Strategy: Stop and think before acting
Time: 2 min
Key Point: Minimal error in this human performance mode. What the human mind wants more than anything is to minimize mental effort. This is a coping strategy to deal with a complex and fast-paced world. Routine actions in familiar environments are performed with little or no thought, based on learned skills. The probability of an error when you are in skill-based performance is approximately 3 errors for every 1,000 actions, or 0.3%. Since this is very reliable and skill-based performers are experts in the action, errors are easily identified. The probability of detecting one’s own skill-based error is approximately 60%. The surprise of detecting one’s error in such a simple action leads to a slap on the forehead, the international skill-based error symbol. Every day each person performs 10,000 skill-based acts. This would lead to roughly 30 skill-based errors per day.
Slip: a resident consenting for multiple patients’ procedures at one time, following the routine daily a.m. process, unknowingly fails to match the patient with the correct mother.
Lapse: set up an IV medication and program the IV pump; leave the room without connecting the tubing to the patient; return to the room to find the medication infusing on the floor.
Fumble: an incident in which epidural medication tubing was connected to the peripheral IV and infused via the incorrect route.
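The daily-error arithmetic in the notes can be checked directly. The rates below are the ones quoted in the notes (10,000 acts, 0.3% error, 60% self-detection); the "unnoticed" figure is a simple extension of that arithmetic, not a deck figure:

```python
# Quick check of the arithmetic in the notes (rates taken from the slide notes;
# the "undetected" line is my derived extension, not a reported figure).

acts_per_day = 10_000          # skill-based acts each person performs daily
error_rate = 3 / 1_000         # ~0.3% slip/lapse rate in auto-pilot mode
self_detection = 0.60          # chance we catch our own skill-based error

errors_per_day = acts_per_day * error_rate
undetected_per_day = errors_per_day * (1 - self_detection)

print(f"Expected skill-based errors per person per day: {errors_per_day:.0f}")
print(f"Expected errors that go unnoticed:              {undetected_per_day:.0f}")
```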
13
Rule-Based Errors
What We’re Doing At The Time: We choose how to respond to a situation using a principle (rule) we were taught, told, or learned through experience.
Type of Error / Example Error Prevention Strategy:
Used the wrong rule / Education about the correct rule
Misapplied a rule / Think a second time – validate/verify
Chose not to follow the rule / Reduce burden, increase risk awareness, improve coaching
Time: 2 min
Key Point: Our hospital makes the most errors while in rule-based performance mode. Rule-based errors occur in choices made from learned operating principles or rules. In this usage, “rule” is bigger than policy or law. Rules describe our knowledge of how the world works. We have rules for everything: oil and water do not mix, everything that goes up must come down. Rule-based errors occur in three varieties. Wrong-rule errors occur when the wrong answer is learned as the right answer; for example, the preceptor trains the orientee in the incorrect method for a task. Misapplication of correct rules occurs when thinking becomes confused. This is never a knowledge problem; it is a critical thinking problem. For example: I am told to call PACT consults when my patient’s condition is decompensating; I now always call PACT and don’t activate a code blue when necessary. Second example: the potassium dose is 1 mEq/kg, so I order 50 mEq for my 50 kg patient (the rule is 1 mEq/kg up to a maximum dose of 20 mEq). Non-compliance occurs when the rule is known (and considered at the time) but a choice is made to do otherwise, thinking that a better result can be achieved with the same or less effort; for example, choosing not to label syringes being used at the bedside or not double-checking the patient’s arm band before administering the medication. The probability of a rule-based error is 1 in 100, or 1%. The probability of self-detecting a rule-based error is 1 in 5. For a wrong-rule error, teach your colleague the correct rule. For misapplication, coach your colleague on critical thinking skills. For non-compliance, coach your colleague by either reinforcing a professional standard or teaching the consequences of the non-compliance.
Rainbow Example: not following the 5 rights of medication administration; hanging the antibiotic labeled for another patient.
14
Knowledge-Based Errors
What We’re Doing At The Time: We’re problem solving in a new, unfamiliar situation. We don’t have a skill for the situation, we don’t know the rules, or no rule exists. So we come up with the answer by: using what we do know (fundamentals), taking a guess, or figuring it out by trial-and-error.
Type of Error / Example Error Prevention Strategy:
We came up with the wrong answer (a mistake) / STOP and find an expert who knows the right answer
Time: 2 min
Key Point: Knowledge-based errors occur in choices where rules do not exist or are unknown to the performer. This error type is better called a “lack of knowledge” error. Knowledge-based errors are associated with performers working outside their practice or facing a very complex case. This is when we are “winging it.” The probability of a knowledge-based error is 3 in 10, or 30%. The self-detection probability is only 11%. You know you are in knowledge-based thinking when you start to question what to do next. This is to be avoided: make rule-based judgments alone; make knowledge-based decisions with friends. Physicians have an excellent strategy for preventing knowledge-based error: they consult. If the generalist does not know, then consult the specialist. The best strategy to prevent knowledge-based error is not to do it. Change your knowledge-based error into someone else’s rule-based success.
Rainbow Example: an employee in orientation, left to work unattended by the preceptor, proceeded in the face of uncertainty and ultimately performed the task incorrectly.
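Putting the three modes side by side makes the slide's advice concrete. The error and self-detection rates below are the ones quoted in the notes for each mode; the derived "undetected per action" figure is my own illustration:

```python
# Side-by-side sketch of the three performance modes using the error and
# self-detection rates quoted in the notes; the residual "undetected per
# action" column is my own derived figure for illustration.

modes = [
    # (mode, error probability, self-detection probability)
    ("Skill-based (auto-pilot)", 0.003, 0.60),
    ("Rule-based (if-then)", 0.01, 0.20),
    ("Knowledge-based (figuring it out)", 0.30, 0.11),
]

for name, p_error, p_detect in modes:
    residual = p_error * (1 - p_detect)   # error made AND not self-caught
    print(f"{name:38s} error {p_error:>5.1%}  self-detect {p_detect:>4.0%}  "
          f"undetected per action {residual:.2%}")
# The gap explains the slide's advice: avoid knowledge-based mode when you can --
# stop and find an expert, turning your guess into someone else's rule-based success.
```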
15
Types of Errors
Active errors: also called sharp-end errors – the slips, lapses, or mistakes that are at the end of the causal chain of events.
Latent errors: also called system errors, system factors, or blunt-end errors – these are poor designs that set people up to make mistakes, such as:
Equipment design flaws that make the human-machine interface less than intuitive (as mentioned in the bed example earlier)
Organizational flaws, such as staffing decisions made for fiscal reasons which increase the likelihood of error
16
Exercise
17
Human Error
Again, human error has received enormous attention in the context of patient safety. The concept and theories are often misapplied and misunderstood. The result is inappropriate “solutions” that do not actually solve anything. Think about those active errors!
18
Errors vs Hazards
19
So have we made a difference?
20
Is Patient Safety Improving?
21
Is Patient Safety Improving?
22
Is Patient Safety Improving?
Maybe, maybe not – but there’s room for improvement. Let’s talk about some of that…
23
Competing priorities
24
Culture
25
Culture – what is culture?
26
Culture
27
Culture
Ross, C. (2017, Mar. 20). When Hospital Inspectors Are in Town, Fewer Patients Die, Study Says. Boston, MA: STAT.
Roughly 3,500 lives a year could be saved if we tidied up the way we do when The Joint Commission is here.
28
The next frontier
29
High Reliability
Three Principles of Anticipation:
1. Preoccupation with Failure
2. Sensitivity to Operations
3. Reluctance to Simplify
Two Principles of Containment:
4. Commitment to Resilience
5. Deference to Expertise
Karl E. Weick and Kathleen M. Sutcliffe
30
#1 People Make Mistakes #2 People Drift
Key Lesson: Do not detect drift through actual events. Find drift before it finds you. As safety coaches at the bedside 24x7, you have the opportunity to see coworkers using or drifting from the expected behaviors. You have the best opportunity to detect the drift and move your team members’ focus back to safety as the priority.
Sidney Dekker, Associate Professor, Centre for Human Factors in Aviation, Linköping Institute of Technology, Sweden
© 2009 Healthcare Performance Improvement, LLC. ALL RIGHTS RESERVED.
31
Safety Culture Composites, 2017
Your hospital's composite score (average % of positive responses; national benchmark at the 50th percentile not shown):
Overall Perceptions of Safety (4 items, % Agree/Strongly Agree): 66%
Frequency of Events Reported (3 items, % Most of the time/Always): 67%
Supervisor/Manager Expectations & Actions Promoting Patient Safety (4 items, % Agree/Strongly Agree): 79%
Organizational Learning – Continuous Improvement (3 items, % Agree/Strongly Agree): 73%
Teamwork Within Units (4 items, % Agree/Strongly Agree): 82%
Communication Openness (3 items, % Most of the time/Always): 64%
Feedback & Communication About Error (3 items, % Most of the time/Always): 68%
Nonpunitive Response to Error (3 items, % Agree/Strongly Agree): 44%
Staffing (4 items, % Agree/Strongly Agree): 53%
Hospital Management Support for Patient Safety (3 items, % Agree/Strongly Agree):
Teamwork Across Hospital Units (4 items, % Agree/Strongly Agree): 61%
Hospital Handoffs & Transitions (4 items, % Agree/Strongly Agree): 46%
32
Culture of Blame
A barrier to our reporting goals. Differences exist within the hospital, within departments, and within divisions or wards. Culture doesn’t change overnight. We need to show that reporting yields positive change.
So I think this was a big barrier for us, and likely for many of your hospitals. We talk about incident reports, for example, and some of our staff see these reports as “write-ups.” I remember my time as a new nurse in our ICU. I’d hear some of the other nursing staff say that they were “filling out an incident report on that respiratory therapist” for giving an aerosol treatment later than the ordered time, or on “the lazy night shift nurse leaving several tasks for the dayshift nurse to do instead of getting them done herself.” This has gotten much better over the past several years, and we see more and more staff filing reports based on errors that they’ve made. And every time I read one of these reports, it truly is a small victory: staff bringing up concerns because they know that the issues they come across can happen to other employees – they want their transparency to help others. Awesome! However, we know that culture doesn’t change overnight, and the successes we’ve seen on certain divisions don’t occur everywhere. There are still areas of the hospital, and a variety of staff members, that see incident reporting in a negative light, so there’s room for improvement there. And we do all that we can to make sure that our staff know that we do our best to work in a blame-free environment. It’s not to say that we allow employees to be careless, but we understand that as humans we make errors, and after the errors occur, the most important thing for our hospital is to hear about those errors, figure out why they occur, and create a plan to fix the problems we have out there. We do this from the start: it’s discussed at hospital orientation, stressed to managers and supervisors, and staff are reassured when their concerns are addressed without a punitive response.
33
Creating a culture of safety
This may have been where we were in the past. We used to say that “we are tracking and trending” all concerns. But were we? Gosh, I don’t think we tracked or trended until it was too late and we had one of those déjà vu events. The only reason staff report concerns, or give suggestions, is to change the environment or the processes that they work in. The second we stop listening to our employees is the second our employees stop reaching out to us and telling us what’s happening. And if we don’t have that inside source, that expert on the front lines, we don’t have the understanding of the system that we need so badly. And like all of you know, this is a tough one. We really cannot take our finger off the pulse. We need to maintain our relationships with the front lines to maintain our culture of reporting, to continually enhance our QA program, and to make Rainbow a safer place for our patients and families.
34
Culture of reporting Remember hazards?
35
Intuitive Design
36
Every system is perfectly designed to deliver the results it gets
37
An Alternative Approach
Human Factors Engineering / Ergonomics (proposed by the IOM and patient safety experts) So – What is it?
38
Definition of the International Ergonomics Association
Ergonomics (or human factors) is the scientific discipline concerned with the understanding of interactions among humans and other elements of a system, and the profession that applies theory, principles, data and methods to design in order to optimize human well-being and overall system performance.
39
But really, what is it?
SCIENCE – Discovers and applies information about human behavior, abilities, limitations and other characteristics to the design of tools, machines, systems, tasks, jobs, and environments for productive, safe, comfortable and effective human use. The basic science of human performance.
PRACTICE – Designing the fit between people and products, equipment, facilities, procedures and environments. Evidence-based design that supports people’s physical and cognitive work.
40
Optimized Outcomes
[Figure repeated from slide 9: the reliability scale from 10^-1 to 10^-8 with the "We are here!" marker, showing Process Design, Reliability Culture, and Human Factors Integration as the three layers toward intuitive design, where it is impossible to do the wrong thing and obvious to do the right thing.]
Time: 2 min
Key Point: See the notes on slide 9; the same reliability scale and the same three means of achieving higher reliability apply here, now with the emphasis on Human Factors Integration.
© 2010 Healthcare Performance Improvement, LLC. ALL RIGHTS RESERVED.
41
3 main sub-disciplines
Physical ergonomics: working postures, materials handling, repetitive movements, work-related musculoskeletal disorders, workplace layout, safety and health.
Cognitive ergonomics: mental workload, decision-making, skilled performance, human-computer interaction, human reliability, work stress and training.
Organizational ergonomics: optimization of sociotechnical systems, organizational structures, policies, and processes; teamwork, scheduling, coordination/communication.
42
Topics of study
Usability, human-computer interaction, mental workload, situation awareness, alerts, lifting, workstation design, training, teamwork, information processing, handoffs, interruptions and distractions, violations or workarounds, human error, safety, resilience, job stress.
43
Signal detection theory
45
Take one of the most commonly discussed explanations for train accidents: "human error." While this may sound straightforward (an accident caused by a mistake by a train operator such as a driver or engineer), there is often a multitude of factors that led to that "error" taking place.
46
But train operators' behavior is conditioned by decisions made by work planners or managers, which might have resulted in poor workplace designs, an unbalanced workload, overly complicated operational processes, unsafe conditions, faulty maintenance, ineffective training, nonresponsive managerial systems or poor planning. As such, it is a gross oversimplification to attribute accidents to the actions of frontline operators.
47
Human Error?
48
HFE
49
HFE
50
HFE
51
HFE
52
HFE
53
HFE
54
HFE
55
HFE
56
HFE
58
A total of 16% of patients used the epinephrine autoinjector properly
A total of 16% of patients used the epinephrine autoinjector properly. Of the remaining 84%, 56% missed 3 or more steps (eFig 1A). The most common error was not holding the unit in place for at least 10 seconds after triggering. A total of 76% of erroneous users made this mistake (eFig 1B). Other common errors included failure to place the needle end of the device on the thigh and failure to depress the device forcefully enough to activate the injection. The least common error was failing to remove the cap before attempting to use the injector.
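To keep the denominators straight (the 56% and 76% are fractions of the erroneous users, not of all patients), here is the arithmetic per 1,000 patients; the absolute counts are derived for illustration, not reported figures:

```python
# Arithmetic behind the autoinjector figures, per 1,000 patients (the 16%, 84%,
# 56%, and 76% are from the study as quoted; the absolute counts are derived).

patients = 1_000
correct = round(patients * 0.16)             # used the device properly
erroneous = patients - correct               # the remaining 84%

missed_3_or_more = round(erroneous * 0.56)   # 56% *of erroneous users*
no_10_second_hold = round(erroneous * 0.76)  # most common single error

print(f"Correct use:                 {correct}")
print(f"Any error:                   {erroneous}")
print(f"Missed 3+ steps:             {missed_3_or_more}")
print(f"Did not hold for 10 seconds: {no_10_second_hold}")
```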
61
EpiPen
“Only half of the patients with serious relapse were able to use the EpiPen correctly, despite verbal and printed instructions, and practice with an EpiPen trainer.”
Food allergy support group: 50% had an adverse event.
62
“…we strongly advocate not blaming clinicians for violations, but rather searching for a more systems-oriented causal explanation. It is, after all, the causes of violations that need remediation.”
63
BCMA (Bar-Coded Medication Administration)
Five “rights” of medication administration. Typically:
Clinicians access system software using scanning devices.
At the point of care, clinicians scan the bar code on the patient ID band and scan the bar code on the medication.
If there’s a mismatch between the medication and the patient, an alert from the software warns the clinician.
Often documentation is automated as well.
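As a rough sketch of the matching step described above (the data model and function names are hypothetical, not from any particular BCMA vendor), the core check is simply "does the scanned medication match an active order for the scanned patient":

```python
# Minimal sketch of the BCMA check described above. The data model and function
# names are hypothetical; real systems integrate with the pharmacy order system
# and handle far more edge cases (dose, route, time, lot, expiration).

from dataclasses import dataclass

@dataclass
class MedOrder:
    patient_id: str
    med_code: str

def verify_scan(scanned_patient_id: str, scanned_med_code: str,
                active_orders: list[MedOrder]) -> str:
    """Return 'OK' if the scanned med matches an active order for this patient,
    otherwise an alert the clinician must resolve before administering."""
    for order in active_orders:
        if order.patient_id == scanned_patient_id and order.med_code == scanned_med_code:
            return "OK: medication matches an active order; document administration."
    return "ALERT: no matching order for this patient -- stop and verify before giving."

# Example use with hypothetical identifiers
orders = [MedOrder(patient_id="PT-001", med_code="NDC-1234")]
print(verify_scan("PT-001", "NDC-1234", orders))   # OK
print(verify_scan("PT-001", "NDC-9999", orders))   # ALERT
```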
65
BCMA - Continued Tasks? Person? Environment? Technology? Organization?
66
BCMA - Continued
Tasks: potentially unsafe med administration
Person: patient in isolation, asleep, etc.
Environment: messy, insufficient light
Technology: automation surprises, malfunctioning equipment, meds that don’t scan
Organization: interruptions
67
BCMA Violations? So what happened? What happens?
68
Okay, so now what? The focus moving forward should not be on individual error, but on identifying hazards and creating systems that support patient safety.
69
Agenda Patient/Staff safety in healthcare Where we started
How far have we come? Next Steps Culture Human factors ergonomics (HFE) Questions and such