Medical Errors: Causes and Prevention


1 Medical Errors: Causes and Prevention
Roger L. Bertholf, Ph.D., Associate Professor of Pathology, University of Florida Health Science Center/Jacksonville

2 IOM: To Err Is Human: Building a Safer Health System (2000)
Frequency
Cost
Outcomes
Types
Causes
Recommendations

3 Adverse Event vs. Error
An adverse event is an injury caused by medical management rather than the underlying condition of the patient. An adverse event attributable to error is a "preventable adverse event."
Negligent adverse events represent a subset of preventable adverse events that satisfy legal criteria used in determining negligence (i.e., whether the care provided failed to meet the standard of care reasonably expected of an average physician qualified to take care of the patient in question).*
An error is defined as the failure of a planned action to be completed as intended (i.e., error of execution) or the use of a wrong plan to achieve an aim (i.e., error of planning).
*About half of preventable AEs are considered negligent

4 Examples of Medical Errors
Diagnostic error (inappropriate therapy)
Equipment failure
Infection (nosocomial, post-operative)
Transfusion-related injury
Misinterpretation of medical orders
System failures that compromise diagnostic or treatment processes

5 Frequency of Medical Errors
Study          AEs    Errors   Fatal    Est. Deaths
NY (1984)      2.9%   58%      13.6%    98,000
CO/UT (1992)   3.7%   53%      6.6%     44,000†
†MVA = 43,000; Breast CA = 42,000; AIDS = 16,000. Medical error is the 8th most frequent cause of death overall.

6 How reliable is this estimate?
Includes only AEs producing a specified level of harm
Two reviewers had to agree on whether an AE was preventable or negligent
Included only AEs documented in the patient record*
*Some studies, using other sources of information about adverse events, produced higher estimates.

7 Cost
Adverse events: $37.6 – 50 billion*
Preventable adverse events: $17 – 29 billion
Half of the cost is for health care
These represent 4% (AEs)† and 2% (errors) of all health care costs
*Lost income, lost household production, disability, health care costs
†Exceeds the total cost of treating HIV and AIDS

8 Causes*
Medication error 19%†
Wound infection 14%
Technical complications 13%
*Leape et al. (1991), The nature of adverse events in hospitalized patients (1,133 AEs studied in 30,195 admissions)
†Overall frequency (inpatients) is 3 per 1,000 medication orders; 2 per 1,000 are considered “significant” errors

9 AHA List of Medication Errors
Incomplete patient information
Unavailable drug information (warnings)
Miscommunication of medication order
Confusion between drugs with similar names
Lack of appropriate drug labeling
Environmental conditions that distract health care providers

10 Most Common Medication Errors
Failure to adjust dosage in response to a change in hepatic/renal function (13.9%)
History of allergy to the same or related medication (12.1%)
Wrong drug name, dosage form, or abbreviation on order (11.4%)
Incorrect dosage calculation (11.1%)
Atypical or unusual critical dosage consideration (10.8%)

11 A Comparison of Risks
Risk (per flight) of dying in a commercial airline accident: 1 in 8 million*
Risk (per hospital admission) of dying from a medical error: >1 in 1,000
*1 in 2 million from
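A quick back-of-the-envelope comparison of the two figures above (a minimal Python sketch; the rates are taken directly from the slide):

```python
# Rates quoted on the slide
airline_death_risk = 1 / 8_000_000    # per flight
medical_error_death_risk = 1 / 1_000  # per hospital admission (lower bound)

# How many times riskier a hospital admission is, by these figures
ratio = medical_error_death_risk / airline_death_risk
print(f"Roughly {ratio:,.0f}x higher risk per admission than per flight")  # ~8,000x
```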

12 Six Sigma Quality Control
Quality management program developed at Motorola and popularized by Mikel Harry and Richard Schroeder (2000)
Strives to make QM a quantitative science
Sets performance standards and goals for a production process

13 Six Sigma Paradigm: DMAIC (Define, Measure, Analyze, Improve, Control)

14 Six Sigma Process Performance
[Figure: normal distribution of process output centered on the target, with −Tolerance and +Tolerance limits; x-axis in standard deviations (σ) from −6 to +6; probability markers at .67 and .95]

15 Six Sigma Performance
Goal is to achieve < 1 DPM (defect per million)
Not all processes can achieve the 6σ level of performance
“Deming’s Principle” is that fewer defects lead to increased productivity, greater efficiency, and lower cost

16 Healthcare’s Six Sigma Performance
Process                       % Errors   Sigma
Preventable adverse events    3.0        2.5
Lab order accuracy            1.8        3.6
Reporting errors              0.048      4.8
False negative PAP            2.4        3.45
Unacceptable specimen         0.3        4.25
Duplicate test orders         1.52       3.65
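The % Errors and Sigma columns are linked by the normal distribution. Below is a minimal sketch (assuming the conventional 1.5σ long-term shift used in most Six Sigma tables, which is not stated on the slide) that approximately reproduces several rows of the table and the defect rate implied by a 6σ process:

```python
from scipy.stats import norm

def sigma_level(error_rate, shift=1.5):
    """Convert a defect (error) rate to an approximate sigma level,
    using the conventional 1.5-sigma long-term shift."""
    return norm.ppf(1.0 - error_rate) + shift

def defects_per_million(sigma, shift=1.5):
    """Defect rate (per million opportunities) implied by a sigma level."""
    return norm.sf(sigma - shift) * 1e6

# Rows from the table above, with error rates expressed as fractions
for name, rate in [("Lab order accuracy", 0.018),
                   ("Reporting errors", 0.00048),
                   ("Unacceptable specimen", 0.003)]:
    print(f"{name}: {sigma_level(rate):.2f} sigma")   # ~3.6, ~4.8, ~4.25

print(f"A 6-sigma process: {defects_per_million(6.0):.1f} DPM")  # ~3.4 DPM
```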

17 What Causes Accidents?

18 Sidney Dekker “What is striking about many accidents is that people were doing exactly the sorts of things they would usually be doing—the things that usually lead to success and safety. . . . Accidents are seldom preceded by bizarre behavior.” From The Field Guide to Human Error Investigations (2002)

19 A Primer on Accident Investigation
Human error as a cause
Human error as a symptom

20 Human Error
Bad Apple Theory
  Complex systems are inherently safe
  Human intervention subverts the inherent safety of complex systems
Reaction to failure
  Bad outcome = bad decision
  Retrospective, proximal, counterfactual, and judgmental

21 The Bad Apple Theory
The illusion of success
  Bad procedures often produce good results
  Success breeds confidence
Failure is an aberration
  “The system must be safe”
The economical answer
  It is easier to change human behavior than it is to change systems

22 Assigning Blame Retrospective

23 Retrospective Analysis
[Diagram: sequence of events laid out along a time axis, viewed in retrospect]

24 Assigning Blame Retrospective Proximal

25 Proximity
It is intuitive to focus on the location where the failure occurred
“Sharp end” vs. “blunt end”
  The “sharp end” is the point at which the failure occurs
  The “blunt end” is the set of systems and organizational structure that supports the activities at the “sharp end”

26 Retrospective Analysis
[Diagram: timeline with the “sharp end” (where the failure occurs) supported by the “blunt end”: institution, systems, procedures, organization]

27 Assigning Blame Retrospective Proximal Counterfactual

28 What Might Have Been. . .
In retrospect, it is always easy to see where different actions would have averted a bad outcome
In retrospect, the outcome of any potential action is already known
“Counterfactuals” pose alternate scenarios, which are rarely useful in determining the true cause

29 Assigning Blame Retrospective Proximal Counterfactual Judgmental

30 The Omniscient Perspective
As an investigator, you always know more than the participants did
It is very difficult, if not impossible, to judge fairly the reactions of those who had less information than you
Investigators define “failure” based on outcome

31 Lessons for Investigators
There is no “primary” cause
  Every action affects another
There is no single cause
  Errors in complex systems are nearly always multi-focal
A definition of “human error” is elusive
  Definition of “error”
  Humans operate within complex systems

32 Failure Mode and Effects Analysis
Everything will eventually fail
Humans frequently make errors
The cause of a failure is often beyond the control of an operator

33 10 Steps for FMEA
1. Review the process
2. Brainstorm potential failure modes
3. List potential effects of each failure mode
4. Assign a severity rating
5. Assign an occurrence rating
6. Assign a detection rating
7. Calculate the risk priority number (RPN) for each effect
8. Prioritize the failure modes based on the RPN and severity
9. Take action to reduce or eliminate the high-risk failure modes
10. Recalculate the RPN

34 Ranking the Failure Modes
Calculate the RPN (see the sketch below)
  Rate Severity, Occurrence, and Detection on a scale of 1 – 10
  RPN = S × O × D (maximum 1,000)
Prioritize failure modes
  Not strictly based on RPN
  Severity of 9 or 10 should get priority
Goal is to reduce the RPN
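A minimal sketch of the RPN calculation and prioritization rule described above; the failure modes and ratings below are hypothetical, invented only to illustrate the arithmetic:

```python
from dataclasses import dataclass

@dataclass
class FailureMode:
    name: str
    severity: int    # 1-10
    occurrence: int  # 1-10
    detection: int   # 1-10 (10 = hardest to detect)

    @property
    def rpn(self) -> int:
        # Risk Priority Number = Severity x Occurrence x Detection (max 1,000)
        return self.severity * self.occurrence * self.detection

# Hypothetical failure modes for illustration only
modes = [
    FailureMode("Critical result not relayed to physician", severity=9, occurrence=4, detection=6),
    FailureMode("Verbal order misheard", severity=7, occurrence=5, detection=5),
    FailureMode("Specimen mislabeled", severity=8, occurrence=2, detection=3),
]

# Prioritize: severity 9-10 first, then by descending RPN (not strictly RPN order)
for m in sorted(modes, key=lambda m: (m.severity < 9, -m.rpn)):
    print(f"{m.name}: severity={m.severity}, RPN={m.rpn}")
```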

35 STRETCH!

36 Case Exercise #1 A 91-year-old female was transferred to a hospital-based skilled nursing unit from the acute care hospital for continued wound care and intravenous (IV) antibiotics for methicillin-resistant Staphylococcus aureus (MRSA) osteomyelitis of the heel. She was on IV vancomycin and began to have frequent, large stools.

37 Case Exercise #1 The attending physician ordered a test for Clostridium difficile on Friday, and was then off for the weekend. That night, the test result came back positive. The lab called infection control, who in turn notified the float nurse caring for the patient. The nurse did not notify the physician on call or the regular nursing staff. Isolation signs were posted on the patient's door and chart, and the result was noted in the patient's nursing record. Each nurse who subsequently cared for this patient assumed that the physician had been notified, in large part because the patient was receiving vancomycin. However, it was IV vancomycin (for the MRSA osteomyelitis), not oral vancomycin, which is required to treat C. difficile.

38 Case Exercise #1 On Monday, the physician who originally ordered the C. difficile test returned to assess the patient and found the isolation signs on her door. He asked why he was never notified and why the patient was not being treated. The nurse on duty at that time told him that the patient was on IV vancomycin. The float nurse, who had received the original notification from infection control, stated that she had assumed the physician would check the results of the test he had ordered. Due to the lack of follow-up, the patient went three days without treatment for C. difficile, and continued to have more than 10 loose stools daily. Given her advanced age, this degree of gastrointestinal loss undoubtedly played a role in her decline in functional status and extended hospital stay.

39 Case Exercise #1 What are the systems/processes involved in this incident? What were the failure points?

40 Analysis
MD failed to check the result of an ordered test
Float RN wrongly assumed that MD had been notified of the result
RN incorrectly assumed that IV vancomycin was adequate therapy

41 Failure Points
Laboratory system for reporting critical results
  Is a positive C. difficile culture considered a panic result?
  To whom are panic values reported?
RN/MD communication
  Does the institution foster an environment where RNs can comfortably question MD orders?

42 Lisa Belkin “. . . it is virtually impossible for one mistake to kill a patient in the highly mechanized and backstopped world of a modern hospital. A cascade of unthinkable things must happen, meaning catastrophic errors are rarely a failure of a single person, and almost always a failure of a system.” From How Can We Save the Next Victim? (NY Times Magazine, June 15, 1997)

43 Case Exercise #2 An 81-year-old female maintained on warfarin for a history of chronic atrial fibrillation and mitral valve replacement developed asymptomatic runs of ventricular tachycardia while hospitalized. The unit nurse contacted the physician, who was engaged in a sterile procedure in the cardiac catheterization laboratory (cath lab) and gave a verbal order, which was relayed to the unit nurse via the procedure area nurse. Someone in the verbal order process said "40 of K." The unit nurse (whose past clinical experience was in neonatal intensive care) wrote the order as "Give 40 mg Vit K IV now."

44 Case Exercise #2 The hospital pharmacist contacted the physician concerning the high dose and the route and discovered that the intended order was "40 mEq of KCl po." The pharmacist wrote the clarification order. However, the unit nurse had already obtained vitamin K on override from the Pyxis MedStation® (an automated medication dispensing system) and administered the dose intravenoustly (IV). The nurse attempted to contact the physician but was told he was busy with procedures. A routine order to increase warfarin from 2.5 mg to 5 mg (based on an earlier INR) was written later in the day and interpreted by the evening shift nurse as the physician’s response to the medication event. The physician was not actually informed that the vitamin K had been administered until the next day. Heparin was initiated and warfarin was re-titrated to a therapeutic level. The patient’s INR was sub-herapeutic for 3 days, but no untoward clinical consequences occurred.

45 Case Exercise #2 What are the systems/processes involved in this incident? What were the failure points?

46 Analysis
Verbal orders
  Third party “messengers”
  Use of abbreviations
Failure to question unusual orders
Lack of control over medication availability

47 Failure Points
Hospital policy for medication orders
“Read back” requirement
Ability to circumvent pharmacist review

48 J.C.R. Licklider “It seems likely that the contributions of human operators and [computers] will blend together so completely in many operations that it will be difficult to separate them neatly in analysis.” From Man-Computer Symbiosis (1960)

49 Anatomy of a Laboratory Error

50 Phase I: A failed calibration
Recalibration of the acetaminophen assay was prompted by a QC failure
Recalibration was followed by acceptable QC results

51 Phase II: QC failures
Subsequent QC measurements produced an error code indicating the result was above the linear limit of the method
QC failures went unnoticed, since the LIS did not display the error code
Several patient specimens were reported incorrectly, resulting in inappropriate treatment
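The failure mode above can be illustrated with a small, entirely hypothetical sketch: the message structure, field names, and linearity limit below are invented for the example and are not the actual instrument or LIS software. It shows how a result can cross an interface while its error flag is silently dropped:

```python
# Hypothetical illustration of an error flag being lost at an
# instrument/LIS interface, masking a QC failure.

LINEAR_LIMIT = 300.0  # assumed linear limit for the assay (units arbitrary)

def instrument_result(value):
    """Simulate an instrument result carrying an above-linearity error code."""
    flags = [">LIN"] if value > LINEAR_LIMIT else []
    return {"analyte": "acetaminophen", "value": value, "flags": flags}

def interface_to_lis(result):
    """A lossy interface that forwards only the numeric value,
    dropping the error flags -- the failure described above."""
    return {"analyte": result["analyte"], "value": result["value"]}

qc = instrument_result(450.0)      # QC measurement above the linear limit
lis_record = interface_to_lis(qc)  # the flag is silently discarded

print(qc["flags"])   # ['>LIN']  -- visible on the instrument
print(lis_record)    # no flag   -- what is reviewed in the LIS
```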

52 Phase III: Discovery
The ED staff contacted the laboratory to question the high acetaminophen result on a patient who denied recent ingestion of the drug
Investigation revealed the QC failures, and the assay was successfully recalibrated

53 Phase IV: Investigation
Principal Questions
Why was an acceptable QC result obtained immediately after a failed calibration?
Why didn’t the technologists notice subsequent QC failures?
Should the clinicians have been more suspicious of unusually high results?

54 The Process

55 Failure Points in the Process

56 Unrecognized calibration failure
Roche Modular
Throughput/timing algorithm

57 Unnoticed QC failures
Interface through Digital Innovations box
Error codes are rare in QC results
Supervisory review does not occur regularly on weekends

58 Lack of clinical suspicion
History is often unreliable in overdose cases
An antidote for acetaminophen exists
Symptoms of acetaminophen toxicity may not appear until after the window of therapeutic opportunity has passed

59 Conclusions
An unexpected error occurred in the calibration algorithm encoded in the instrument software
The failure of information to cross the instrument/LIS interface masked the erroneous control results
Suspect results were not immediately apparent to clinicians

60 Lessons
Complex technologies always have unexpected failure modes
Interfaces between systems and operators are opportunities for distortion or loss of important information
The fallacy of the “un-rocked boat”

61 Richard I. Cook “Recognizing hazard and successfully manipulating system operations to remain inside the tolerable performance boundaries requires intimate contact with failure.” From How Complex Systems Fail (2002)

62 How Complex Systems Fail
Complex systems are intrinsically hazardous systems
Complex systems are heavily and successfully defended against failure
Catastrophe requires multiple failures; single-point failures are not enough
Complex systems contain changing mixtures of failures latent within them

63 How Complex Systems Fail
Catastrophe is always just around the corner
Post-accident attribution to a “root cause” is fundamentally wrong
Human operators have dual roles: as producers and as defenders against failure
Human practitioners are the adaptable element of complex systems

64 How Complex Systems Fail
Change introduces new forms of failure
Safety is a characteristic of systems and not of their components
Failure-free operations require experience with failure

65 IOM Recommendations
Establish a national focus
Identify and learn from medical errors through mandatory reporting
Raise standards and expectations
Implement safe practices

66 AHRQ Safety Recommendations for Patients
Ask questions if you have doubts or concerns
Keep and bring a list of ALL the medicines you take
Get the results of any test or procedure
Talk to your doctor about which hospital is best for your health needs
Make sure you understand what will happen if you need surgery

