1
QUALITY
Susan D. Dooley, MHA, CMT, AHDI-F
AHDI-Florida, April 29, 2017
2
Presentation Outline
- Why perform QA?
- Goals of QA
- Factors Affecting Quality
- Blanks
- Personnel, Policies and Procedures, Sampling Guidelines
- Error Categories and Scoring
- Quick Qwiz
- Standardization
- Clinician-Created Documentation Integrity Programs
- AHRQ study
- The Nonsense Record

This is all that I have ambitious plans of covering in our little last hour of our state meeting. Hopefully I can get through it all.
3
Why perform QA?
4
We perform QA to ensure quality
QA is not done to "ding" medical transcriptionists and find ways to reduce their pay. QA IS done to ensure, as much as possible, the accuracy of medical reports. This is one place where accuracy is vital; lives depend on it! I wanted to put "Duh!" in that headline, but I decided not to. Anyway, let me give you an example of why quality matters so much.
5
Juno: A Case in Point Plexus, September-October 2013
In the Juno case, the doctor whose patient was killed by an insulin overdose said he thought the MT department was still in the basement of his hospital. He had no idea it had been outsourced overseas to Precyse. He also had no idea that the HIM department had no oversight or ongoing QA of the transcription but instead depended on Precyse to do all the QA. His patient, Ms. Juno, died of an insulin overdose because he trusted the medical transcription department. Compelling story, isn't it?

Okay, the case was a lot more complicated than that, but in essence that's how the winning attorneys built their cases. First, let me tell you that this was one of the few healthcare documentation-related malpractice cases to actually go to trial. (There are more cases like this than we will ever know about, because the majority settle out of court and keep the final terms of the settlement confidential.) In fact, I think the only reason any of us even know about this case is that so many AHDI members were expert witnesses, including Lea Sims, the author of this article, who actually testified in court. And our own Brenda Hurley, too. The Juno case was later appealed, which isn't surprising with a $140 million jury award to Juno's heirs. Once it was appealed, this case too was settled out of court with a confidential result.

The jury returned a verdict holding only the hospital and the transcription companies responsible for this outcome. In reality, Lea said, all parties were partially to blame, but in this case the two people usually most accountable for care outcomes, the physician and the nurse, were not included in the verdict. Lea said that the scapegoat of an offshore transcription service made it easy for the jury to ignore the responsibilities of the clinical team and cast blame on a foreign source of error.
6
QA practices made the case
D: Levemir 8 units
SR: Levemir 80 units
On the discharge summary: Levemir 80 units

But, Lea said, a great deal of time was spent in depositions and in the courtroom examining AHDI's quality guidelines to determine whether the transcription companies were engaging in rigorous quality assurance practices. The plaintiff's attorney succeeded in convincing this jury that poor quality measures and negligent quality assurance practices contributed to the error in this patient's record. The implication was that if this hospital had entrusted its documentation to a more qualified, certified healthcare documentation team, the error likely would not have occurred and the patient would not have died as a result.

This slide shows the specific error: the doctor dictated a dose of 8 units, and speech wreck and the SREs put 80 units. As seductive as speech wreck can be, I'm not sure I would have caught that error either. AHDI defines a "critical error" as one that can affect patient safety, and that reinforced the plaintiff's argument that transcriptionists should not be making documentation errors that can "kill" a patient. So let's take a look at those QA practices that AHDI recommends.
7
Just wanted to show you where you can find these materials I'm going to talk about now. AHDI's QA Best Practices Toolkit is available to members and nonmembers alike right on the website. Just go to the main page, click on Body of Knowledge, then Best Practices, and you'll find Quality Assurance.
8
Goals of QA Accurate and complete medical records
Timely and accessible distribution of those records

So according to that toolkit, which was created in 2010 but is under redevelopment right this very minute (the plan is to roll out the revised version at the annual meeting in San Antonio in July), these are the overarching goals of a QA program. What is the end result we're looking for? Accuracy and completeness, and timely and accessible distribution, are two goals few could argue with. By the way, I love to make up little stories about the people in clip art pictures. Here we see a young mother and her baby looking over the shoulder of their doctor and his EMR in horror (at least on the baby's part) at the quality of his clinician-created documentation. "What do you mean my gender reassignment is speech wreck's fault?!" the baby is saying.
9
Quality Assurance in a Nutshell
- Ensure a statistically valid sampling
- Apply auditing to all documents, however created
- Ensure consistent, unbiased quality review
- Error values should be consistent with definitions in the QA Best Practices toolkit

These are some of the attributes of a good QA program.
10
Empowering the MTs/HDSs
- Communicate consistently about errors
- Ensure MTs and HDSs have access to reference materials, lists of provider names, and account specifications

These are some goals the QA program should have.
11
Factors Affecting Quality
The document author (dictator, author of a PowerNote or other self-created documentation) has the power to affect our quality:
- Ability to organize and articulate thoughts
- Background noise
- Rapid speech
- Heavy accents
- Poor articulation
- Low volume
- Audio quality

None of us can argue with these factors, out of our control, that affect our ability to do our work. Have you ever tried to edit the report of a dictator who was clear speaking but extremely slow and incredibly disorganized? I think I'd rather have a speech-impeded rapid-fire mumbler from Mumbai dictating instead of that slow, mispronouncing, disorganized wretch!
12
Factors Affecting Quality
- Audio equipment
- Technical issues with the audio file for transcription
- Cell phone dictation
- HDS experience level
- Patient demographics
- Clear account specifications
- Samples of clinician reports for HDS/MT use
- Availability of spelling checkers and text expansion software

Of course every doctor and nurse practitioner in the history of time has to sit down to dictate right next to the nurses' station, where the nurses are having a huge loud party, underneath the loudspeaker that is shouting "code red, code red." And then the dictator quietly whispers her dictation so she doesn't disturb the party. Or the fire. And cell phone dictation -- now there's the bane of my existence!

But HDS experience level is a factor I tend to forget, even though I spent more than 20 years in the classroom. I've been fortunate to work with the most experienced and highly qualified staff of my lifetime in the past year or two at Florida Hospital, so I forget what a difficulty it is -- yet an essential function -- to work with new graduates. Their work needs close monitoring for at least their first couple of years, until they know what they don't know.

Regarding demographics, I remember 37 years ago when we didn't have access to an automated ADT feed from the hospital admission, discharge, and transfer files. In fact, that ADT consisted of a single admission-discharge census that the assistant medical records director created each morning on her correcting Selectric. Guessing at the spellings of patients' names was common back then. But today, if an author enters the patient's identification number incorrectly, it's a great frustration for us. For example, right now at my organization we have a psychiatry resident who thinks the medical record number is the same as the financial identification number (or FIN) we use to differentiate among a patient's admissions in a given chart. She never identifies herself or the patient, she puts the wrong number in, and then you have to figure out who the patient is.
13
BLANKS

Your colleague in the healthcare documentation department tells you that she never leaves blanks because they are unprofessional. What do you tell her?
A. "That is true; only newbies leave blanks."
B. "The integrity of a report is best protected when we do not leave blanks."
C. "A well-researched blank is honorable, and it is dangerous to guess or leave out a word without indicating its omission."
D. "Our department requires accurate-appearing reports, and leaving blanks makes the reports look inaccurate."

Here I really need to talk about the phenomenon of blanks. So here's a quick quiz for you. I have heard all of these arguments in my 37 years in the MT business. How many for A? How many for B? C? D? Well, obviously the correct answer is C.
14
"How far should we go in trying to verify a word? One can spend hours researching a word, but in my book, that's artsy-craftsy, not professional." There are blanks and there are blanks. Vera Pyle was one of the founding mothers of AAMT, now AHDI, much like our own Jenola Bradwell was here in Florida. In the early days of this organization, Vera could be found ubiquitously writing on style and medical information in the AAMT Newsletter, and she wrote a book that was one of the most indispensable tools we had before the internet, Current Medical Terminology, that silver book. We just called it "Vera." Vera's the one who said a well-researched blank is an honorable thing. I found this quote in the 2007 edition of her book this week. It explains the logic behind the honorable blank. The Honorable Blank
15
Blanks Are Necessary

Valid Blanks:
- Unknown person or place
- Discrepancy in dictated details
- Clipped, cut off, or omitted dictation
- Inability to verify terminology
- Audio file distortion
- Or you Just. Can't. Get. It.

Invalid Blanks: "One that perhaps could have been resolved by the MT had they employed better resolution practices."

This listing is from the QA Best Practices toolkit, but I take issue with it because I think it's too limited. To me, as a working QA, there are several levels of blanks. Most of my work day is spent filling blanks in. (And I just LOVE this, it's a blast!) I can't imagine a situation where there isn't a next level of colleagues where you can send a report for a "second pair of ears," or even a fifth or sixth. Even among our QAs, we frequently ask each other for a "relisten." I think it's a real badge of honor in our department when at the bottom of a report you'll see three or four sets of initials separated by slashes, indicating how many people tried to make the report functional for patient care.

I would much, much rather have blanks than guesses in a report that comes to my hold queue, because I may miss those guesses. If there's a blank, I'll do my best to fill it, and if I can't fill it and my colleagues in QA can't fill it, then it's definitely one of Vera's honorable blanks. The invalid blank, though, is a little harder to define. I know we've all seen them in QA. As a QA Educator, I remember a few of the MTs in the department who were hesitant to go with their gut -- I was sure they knew what the doc was saying, but they sent in a blank anyway. Or a bunch of blanks. With someone like that, the problem is a lack of confidence, so you can't just whip them and beat them and scream at them -- you have to encourage, and build them up, and say "I bet you knew this!" Still, again, better a blank than a guess. Then earlier this week I had a report come in from one of our services with 27 blanks in it.
What bugged me most about that was that it came from one of the service's QAs, and most of the blanks were really easy for me to fill, so she should have been able to get at least 10 or 20 percent of them. But that in itself didn't really bother me, though I thought it was remarkably lazy of her. The worst part was that each of those dictation anomalies (the "blanks" you insert in the old Dictaphone SR platform that take you right to the audio) was cued to return to the beginning of the report. So as I filled a blank, I would hit control-Q to go to the next spot in the audio, and it would take me back to the beginning of the report. In an 8-minute dictation. Imagine the curse words coming from me. This is a way to really P.O. your QA staff.

I've been told that there are organizations that actually lower their HDSs' pay for leaving blanks. I can't think of a worse practice. Now, if your QA program has noticed a consistent problem with an individual leaving too many blanks and it's become a pattern -- perhaps of laziness, like that person I was talking about a second ago, perhaps in an attempt to build a line count -- that's one thing. But first I'd counsel before I auto-cut pay. I just think that penalizing MTs for leaving blanks is going to promulgate guesses in the report. And I don't want guesses going out on my charts.
16
Resolving Blanks

For clinicians: Dictate clearly to begin with (Ha!)
For clinicians: Refrain from using a cell phone (Good luck with that!)
For clinicians: Read the report, fill in any blanks, dictate addendums, THEN sign (As if!)

Some of these recommendations are so logical yet so laughable because of how frequently they're ignored, especially the ones for the clinicians. Doctors aren't going to dictate clearly (they always talk right through their yawns), and they ARE going to use their cell phones. But why is it that they first sign the report, then have their nurse practitioner dictate an addendum? I wish they knew what a huge project it is to do a "full-blown secret squirrel" on a report. But there's a tool in Cerner, our Florida Hospital EMR, that allows docs to do a mass signing of everything in their inbox. THEN they read the reports. Forehead slap! That's why I put Secret Squirrel here. He's kind of an inside joke among the QAs at Florida Hospital, because our method for "unsigning" a report was named the secret squirrel by a computer programmer way back when we first got Cerner.

Though, on the other hand, sometimes we underestimate our clinicians' willingness to be our partners. Recently we had an issue with our new service understanding what one of our clinicians is doing. Our docs are used to us taking care of them and reading their minds, but once you've been outsourced, that can't happen anymore. This doc says he's dictating a consult for Dr. So-and-So; in other words, Dr. So-and-So was the first choice and he fills in for her. Our supervisor Wendy texted the doc and asked him to just say at the beginning of the report, "I am doing this report for Dr. So-and-So." Now he's doing that, the service just types his dictated statement at the top of the report, and all is well.
17
Resolving Blanks

For facilities: Teach the dictators to dictate
For facilities and MTSOs: Make sure the MTs have sample reports
For facilities and MTSOs: Give MTs feedback on blanks consistently and frequently
For MT and QA staff: Review other documents or records to resolve discrepancies

When I edit, I use those anomaly insertion points a lot. If I don't understand something immediately, I just stick an anomaly in. I pretend that I'm sending my report to QA, and then I come back and QA it myself. But I don't edit very often, and I'm not very fast to begin with, so I'm not someone to offer advice on that job. On filling blanks, though, I'm good. My biggest go-to is to slow down the audio; lots of times that's enough. Then I go to dictator samples and the electronic medical record. We have a department website brimming with dictator samples, but I also make my own, in big long groups, all in one document, so I can quickly control-F to a common phrase I think I'm hearing. The other thing I lean on is our EMR, and I'm at the point now where I just don't want to work without it. So if you're working for a service and you don't have access to the EMR, and you can't understand what medication the dictator is saying or what the halfwit nurse practitioner is saying when she reads the radiology reports and pronounces all the words completely wrong, just send me the report and I'll look it up. Quick and easy for me. And it keeps us from having errors in our patient records.
18
Personnel: Quality Players

- Healthcare Documentation Specialists
- QA Editors / Supervisors / Managers
- Support Staff / IT
- Healthcare Providers
- Administration / HR
- Trainers / Instructors
- Educational Institutions

Continuing with QA best practices: you gotta have good people. These are all the people who are involved in a QA program. I like that they suggest including educational institutions, especially if your organization partners with a particular school, the way Seminole did with Florida Hospital back in the day. I was always eager for my advisory committee to give me feedback on my graduates' work, what they needed to learn, how I should tweak the curriculum.
19
Policies and Procedures
- Review with "original input"
- Concurrent Review
- Retrospective Review
- Flagged Documents
- Feedback
- Author Assessment

So though I've talked extensively about how most of my time lately is spent filling in blanks, a quality assessment program goes beyond that. First, if you're doing QA, you need to "proof with the tape," as we used to say in the analog days -- meaning review the transcript with the "original input," which is the voice file. Use a style guide, preferably the AHDI Book of Style (also currently under revision this year!). But supplement the BOS with a guide that is specific to your facility, or to the client specifications you're working on. Adhering to a style guide is important because it helps take the subjectivity out of quality reviews. Say you have an issue with people starting sentences with a number, and you want to ding your MTs when you see them do that. If that rule about sentences and numbers isn't hard and fast in the BOS or in your own organization's style guide, then you can't ding the MT.

Concurrent review is when you look at work before it goes on the chart. This is what you want to do with a new transcriptionist, either a new hire or a more inexperienced one. For some period of time you'd need to do 100% review of that person's work. Then maybe you can limit the reviews to certain work types or certain dictators, once the MT gains more experience. Similarly, in an MTSO setting, I would hope that an MT starting on a new account would be put on concurrent review for a while until they're up to speed.

Retrospective review is when you review a report after it's already hit the chart. The issue there is that if you find errors serious enough, you have to amend the report to correct them ... but if you're finding mistakes, they sure would need fixing! In these cases you still must be able to review these reports while listening to the audio, so the trick will be to ensure that it's still available. Some systems purge audio pretty quickly, within a month or less.

Flagged documents are the ones I was talking about with the blanks. Now, I don't send feedback on every blank I fill -- unless the MT asks me for feedback on it, in which case I do a full-blown review. Not grading it, either; not when someone is asking for feedback because they're having trouble with a report! Our FH hospital MTs have the ability to look up their work on the chart to see how it was finally resolved, but that will end in a couple of weeks when our last employees leave us. (Sigh.) Flagged documents, in my opinion, shouldn't be graded.

Interestingly, the AHDI QA guidelines recommend reviewing the DICTATOR along with the MT when you do QA. I've never seen that done. But one thing you could do, if you had a way to tell, is check to see if there's a consistent issue with the technology the dictator's using. Or maybe the dictator keeps dictating reports in fragments. Or uses his cell phone to dictate deep in the hospital basement next to the morgue with lead-lined walls, so he has less than one bar of signal. Or she doesn't know what a patient number is. Then the department supervisor could make a quick phone call and see if they can resolve these issues.
20
Sampling Guidelines (Math is HARD!)

Statistically Valid Sampling:
- Random sample
- Sample size: try the 5/3 method
- 1% per month recommended
- This gives a 95% confidence level with a margin of error

Well, there is math on this slide. Sort of. The AHDI QA guidelines say a statistically valid sampling methodology has to be scalable -- that is, something you can use on one MT or on 1,000 or more -- and it also has to accurately indicate the quality of the MT and the department or the MTSO. I mean, it would be great to audit every report ... NOT! That's not going to happen -- it's just too time consuming and cost prohibitive. The QA guidelines recommend a 1% sample size of an individual's output. This allows for a 95% confidence level with a margin of error. If you want to see the math on this, go nuts -- go download the QA report! There's even an actual formula on page 22. Me, I'm with Barbie.

For the scores to be valid, samples have to be representative and selected at random. The QA guidelines recommend developing an algorithm, but again, I'm still with Barbie. One other idea they suggest is something called "the 5/3 method." With this you pick documents that have some number ending in 5 or 3 until you get to your desired number -- obviously you'd have to reach a consensus on what number, like maybe an account number or a medical record number. But this is nicely random and easy to deal with.
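For the curious, here is a rough sketch of where confidence levels and margins of error come from. This is the standard Cochran sample-size formula with a finite-population correction, not necessarily the exact formula on page 22 of the QA report, and the monthly volume of 2,000 reports is an invented example:

```python
import math

def sample_size(population, z=1.96, margin=0.05, p=0.5):
    """Cochran's sample-size formula with a finite-population correction.

    z=1.96 corresponds to a 95% confidence level; margin is the
    acceptable margin of error (here +/-5%); p=0.5 is the most
    conservative assumed proportion of reports containing errors.
    """
    n0 = (z ** 2) * p * (1 - p) / (margin ** 2)
    # Correct for the fact that a monthly report volume is finite
    n = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n)

# A hypothetical MT producing 2,000 reports a month:
print(sample_size(2000))  # 323 reports for +/-5% at 95% confidence
```

Notice that 323 reports is a lot more than a flat 1% (20 reports) of that volume, which is exactly why the margin of error matters when you quote a confidence level for a small sample.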
21
Error Categories

- Critical Errors: affect patient safety, care, or treatment
- Noncritical Errors: affect document integrity
- Feedback Errors / Educational Opportunities: do not change meaning or affect patient care

I'll go into these more on individual slides to follow, but basically there are just 3 error categories. Personally I think there should be another category, i.e. "Errors that Make Us All Look Like Idiots," but I was overruled on that one.
22
Critical Errors (-3)

- Terminology misuse
- Omissions/insertions
- Incorrect patient demographics or author identification

D: This 92-year-old female did well postoperatively but went to the ICU because of cardiomyopathy and age.
T: This 92-year-old female did well postoperatively but went to the ICU because of cardiomyopathy and AIDS.

D: Patient takes 40 mg of Lasix.
T: Patient takes 400 mg of Lasix.

D: He had no episodes of unconsciousness en route.
T: He had episodes of unconsciousness en route. [dropped "no"]

Some examples of critical errors include the use of incorrect terminology, omission of dictated information, insertion of nondictated information, or an incorrect patient.
23
Noncritical Errors (-1)

- Misspelling
- Incorrect verbiage
- Failure to flag
- Protocol failure
- Formatting/account specifications
- Incorrect use of soundalikes like elicit/illicit, dissent/descent, affect/effect, apprise/appraise

D: Involvement with secondary infection
T: Involvement. Impression of infection

D: (in female exam) prostate exam performed.
T: Transcribed as dictated. This should be flagged.

D: Blood pressure 110/60.
T: Blood pressure 11/60

Some examples of noncritical errors include incorrect verbiage, misspellings, protocol errors, and typographical errors.
24
Feedback Errors / Educational Opportunities

- Grammar
- Punctuation
- Capitalization
- Plurals
- Run-on/fragment sentences
- Abbreviations
- Slang
- Inconsequential typos and omissions
- Incorrect word forms

D: Ace bandage
T: ACE bandage

D: I saw the patient yesterday.
T: I saw the patient the patient yesterday.

T: gram negative rods
Corrected: gram-negative rods

The "educational opportunities" errors include grammar and punctuation, capitalization, run-on and fragment sentences, plural usage, abbreviations, typos (like "teh" for "the"), drug name capitalization, and incorrect word forms (like using the adjective form of femur (femoral) when the noun form was called for).
25
Scoring: Error Value from 100 Method

- If the same error is repeated, count it only once
- A score of 98 is considered passing
- Deduct 3 points for a critical error
- Deduct 1 point for a noncritical error
- Deduct 0 points for an instructional error
- Any report with a critical error should fail
- Pass/fail versus scored audit

With this method, each error's value is subtracted from a per-document score of 100, which is assumed to be a perfect score. You don't discount a report for its length -- counting more or less off because of it -- because all reports are equally important to patient care, right? There are methodologies where, if a report has more than 100 lines in it, a critical error doesn't fail it; but any report with a critical error should fail. The ultimate outcome is to ensure accurate records for patient care.
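The deductions and pass rules above can be sketched in a few lines of code. The function name and defaults here are mine, not AHDI's; it just encodes the method as described on the slide:

```python
def score_report(critical=0, noncritical=0, feedback=0, passing=98):
    """Error-value-from-100 scoring for a single report.

    The counts should already de-duplicate repeated identical errors
    (the method counts the same error only once). Critical errors
    deduct 3 points each, noncritical 1, feedback/educational 0, and
    any critical error fails the report outright regardless of score.
    """
    score = 100 - 3 * critical - 1 * noncritical  # feedback deducts 0
    passed = critical == 0 and score >= passing
    return score, passed

print(score_report(noncritical=1))  # (99, True)  -- passes
print(score_report(critical=1))     # (97, False) -- auto-fail on critical
print(score_report(noncritical=3))  # (97, False) -- below the 98 cutoff
```

Note how strict the 98 cutoff really is: two noncritical errors on a single report is already a failing 98-minus, and one critical error fails no matter what the arithmetic says.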
26
Qwik qwiz! Spot the error – Assign an Error Category!
This should be an interesting exercise. I've collected some transcription errors I've come across this week, and I'm going to put them up here and first see if you can spot the error, and second see how we all classify them. As much as we try to be objective, there are some definite judgment calls in assigning categories, so that's what I'm interested in seeing here.
27
T: Patient will return to clinic on as-needed basis. She does continue on Percocet 5/325. It has been helpful for her hip and thigh pain, and thus, we have asked her to hold off on taking this. Spot the error!
28
T: Patient will return to clinic on as-needed basis. She does continue on Percocet 5/325. It has been helpful for her hip and thigh pain, and thus, we have asked her to hold off on taking this.
D: Patient will return to clinic on as-needed basis. She does continue on Percocet 5/325. It has NOT been helpful for her hip and thigh pain, and thus, we have asked her to hold off on taking this.

See what happened? They left off the "not" there, and thus the sentence makes you go "huh?" Forgetting this one word probably wouldn't kill anyone, but it would make it difficult for future readers of these records to understand why the clinician decided to stop the medication, since it supposedly had been helpful. On listening to this sentence, I heard that the word "not" was dictated; its presence made the sentence make sense. How many for critical? How many for non? How many for feedback? I would class this as a critical error. Classify the Error
29
Assessment: Gluteal/Inguinal hydradenitis
-r/o abscess
PCN allergy

Spot the error!
30
Trick question! This was a clinician-created report.
Assessment: Gluteal/Inguinal hydradenitis → hidradenitis
-r/o → rule out abscess
PCN → Penicillin allergy

If an MT spelled it "hydradenitis" I would have a hissy fit, but I don't expect a doctor to be able to spell. After all, I don't prescribe medicines; why should I expect a doc to be good at my job? In fact, I'm kind of glad that they aren't. Fortunately, in the consultation that went along with this progress note, the MT spelled it correctly throughout as "hidradenitis suppurativa in the inguinal and gluteal region." But pretending that this was an MT's report, and ignoring the bogus "r/o" and "PCN," how would you classify the misspelling of hidradenitis? How many for critical? How many for noncritical? How about feedback? Classify the error
31
Patient has been subsequently taken to the OR to the drainage of the abscess, and followed by the resection and expiratory laparotomy. Spot the error!
32
T: Patient has been subsequently taken to the OR to the drainage of the abscess, and followed by the resection and expiratory laparotomy.
D: Patient has been subsequently taken to the OR to the drainage of the abscess, and followed by the resection and exploratory laparotomy.

So is this critical? Noncritical, worth a point off? How about just educational? Classify the error
33
PREOPERATIVE DIAGNOSES
1. Chronic sinusitis with nasal polyposis.
2. Deviated septum.
3. Obstructive retinopathy secondary to #1, 2, and 3.

Spot the error!
34
Classify the error

Transcribed:
PREOPERATIVE DIAGNOSES
1. Chronic sinusitis with nasal polyposis.
2. Deviated septum.
3. Obstructive retinopathy secondary to #1, 2, and 3.

Dictated: Obstructive rhinopathy secondary to the above.

Critical? Noncritical? Educational? This actually came in with the job note: "Discrepancy, dictated secondary to numbers 1, 2, and 3, but only 2 previous diagnoses dictated." I couldn't find another pre- or postop diagnosis in the chart, so I simply changed the "secondary to #1, 2, and 3" to "secondary to the above." Obviously we would want you to continue to flag issues like this, not edit this way yourselves. But what was upsetting was what was missed. I'm sure it was speech wreck seduction causing this error, but still, bleah! It's a nose procedure -- and what on earth is obstructive retinopathy? Is that when your retinas get so swollen they block your nasal passages?
35
Sinus bradycardia on EKG on metoprolol, heart rate of 155 beats per minute.
Spot the error!
36
Transcribed: Sinus bradycardia on EKG on metoprolol, heart rate of 155 beats per minute.
Dictated: Sinus bradycardia on EKG on metoprolol, heart rate of 55 beats per minute.

So what do you think? Critical? Noncritical? Feedback? This one was actually pretty obvious if you stopped and looked at it while you were editing. It should at least have been flagged, if you were really hearing "155," because he said BRADYcardia. And ...
37
BRADY is Slow!!
38
STANDARDIZATION
39
The obvious reason: Speech recognition engine training
The speech wreck engine gets screwed up if you deviate too far from a single standard This is one reason that editing (i.e. changing wording) is largely discouraged now But I still do it. If there’s a phrase that’s completely out of order, or if the doctor just flat used the wrong word and I know what he meant to say, I edit. But I have the advantage of working for a single institution, or actually a health system – I know our organization’s standards and principles, I know what the doctors are saying, and I also know that if the doctor doesn’t WANT that change made, he or she can fix it or call the office, and I won’t get beaten for using my professional judgment. Why standardize?
40
Standardized formats are important in the electronic medical record because the work type standard format is how the EMR knows where to put the document. For example, in Florida Hospital’s EMR, a report formatted as a 98, or letter, goes into a folder outside the patient’s legal medical record. Standard formats
41
Using standards to create “science-quality” data
There's a push to use electronic medical records for secondary purposes, like data mining. It's easy to mine data from a pick list, but text isn't as simple. This is a reason that AHDI and its volunteers originated and have been active in the Health Story project with HIMSS: to make it easier to mine information from transcribed records.
42
Using standards to mine medical text for data
But even so, with widely varying language, this isn’t so easy. “The data arrangement and retrieval of such text parts become difficult because they are often described in a free format; the words, phrases, and expressions are too subjective and reflect each writer.” (Kushima, 2012) Using standards to mine medical text for data
43
Graph used to map data points prior to creating an algorithm
This research project used text-mining techniques to extract information from nursing records in EMRs. The researchers determined "feature vocabularies" seen in past chronic hepatitis patient records at a Japanese university hospital, then extracted the vocabulary relating to proper treatment methods. The point, as far as we are concerned, is how widely variable records can be. Some degree of standardization, even as simple as that provided by the Book of Style, could help make medical research a little more reliable and a little easier to do. (Kushima, 2012)
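To give a feel for what "feature vocabulary" extraction means in practice, here is a deliberately crude sketch: it ranks terms by how many documents they appear in. The three sample "notes" are invented stand-ins for nursing-record free text, and the whole approach is far simpler than what the actual study did.

```python
import re
from collections import Counter

# Hypothetical mini-corpus standing in for nursing-record free text.
notes = [
    "Patient reports fatigue; ALT elevated, interferon therapy continued.",
    "Interferon dose held due to fatigue and low platelet count.",
    "ALT trending down on interferon; fatigue improving.",
]

def feature_vocabulary(docs, top_n=5):
    """Rank terms by document frequency -- a crude stand-in for the
    'feature vocabulary' extraction described in the study."""
    df = Counter()
    for doc in docs:
        # Count each term once per document, regardless of repeats.
        df.update(set(re.findall(r"[a-z]+", doc.lower())))
    return df.most_common(top_n)

print(feature_vocabulary(notes))
```

Even this toy version shows the problem the study describes: free-text wording varies so much from writer to writer that frequency counts alone only get you so far, which is exactly where standardization would help.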
44
Clinician-created documentation auditing
AHDI has a pretty robust toolkit on its website for starting one of these clinician-created documentation auditing programs, in the same basic area as the QA Best Practices toolkit. Look for this title: QA Program for Clinician Created Documentation. There's a PowerPoint, a list of errors and categories, sample policies and procedures, justifications, etc. A New Frontier for People Like Us
45
PLEASE NOTE: voice recognition software used in this document
PLEASE NOTE: voice recognition software used in this document. There may be multiple grammatical and punctuation errors, improper use of tenses, misspellings and gender reassignment. I have a statement like this that pastes in automatically on emails I send from my iPhone, but that's email, not patient records. Ever seen this?
46
Actual clinician-created progress note
Mr. Smith Bob is a 63-year-old male but cirrhosis hepatocellular carcinoma secondary to alcohol and hepatitis C his last CT scan on 45 showed a 3.4 cm lesion in segment 5 of his liver. A year ago Z of 2 stable at 7.6 patient also has COPD emphysema prior history of TB that was treated. He denies any encephalopathy did have treatments for his TB with the diamond in which she developed upper bleeding. This lesion has grown slightly in the follow up he was admitted on April 2016 Tampa and stopped. At that time the lesion measured 3.4 cm. Are you looking at this going “what?” This report didn’t have that “hey, it’s just speech wreck, so deal with it, dude” note on it. I don’t know how this patient cut his liver with someone’s diamond. Or maybe he got his TB fixed. Actual clinician-created progress note
47
As the AHDI task force has been revising the QA Best Practices documents this year, we've also been considering combining that with the information and recommendations for clinician-created documentation auditing. We even thought about holding both clinicians and MTs to the same standards, which obviously would have to be significantly lowered to accommodate the clinicians. Most doctors can't spell, most can't type well, and most can't document nearly as well as they can when they have our help. Like Dr. McCoy says. And I'm not a doctor, so I totally get it.
48
“Praise and credit rise to the highest person on the totem pole, criticism and blame fall on the lowest.” —Vera Pyle I came down, along with several others, strongly on the side of holding MTs to a higher standard than we do the doctors. It’s not fair, no, but we do this for a living – documentation is our profession. And secondly, what Vera said. I’m not sure what the final outcome will be, but I should hear more in the next few weeks. As you will see, though, their critical errors are similar to ours.
49
The Need for Quality Assurance
Common EHR Practices that Create Vulnerabilities
1. Copy and paste, or "note bloat"
2. Lack of review, correction, and feedback
3. Unmanaged/inconsistent template creation and modification leading to automation errors
4. System(s) designed and built with limited healthcare documentation expertise
These next two are slides I pulled in from that presentation. #3 can also include the use of macros, expanders, or different automation techniques. A QA program can effectively address each of these issues to ensure quality of care and continuity of care, and to decrease physician and clinician frustration while streamlining and supporting the documentation process. Reference the "AHIMA Copy and Paste Position Statement" link in the Resources slide at the end of the presentation. The Need for Quality Assurance
50
The Need for Quality Assurance
Additional vulnerabilities:
Inappropriate abbreviations
Inappropriate templates
Wrong patient/wrong visit
Selecting incorrect check boxes
Speech "wrecks"
Don'ts:
Not using standard abbreviations
Use of the wrong visit type, or wrong dropdown in the EHR forms
Cut and paste as a shortcut, leading to mistakes and misinformation
Speech "wrecks": systems that misinterpret what the originator is saying, or overlooked front-end speech recognition misrecognitions
Insertion of inconsistent or lengthy progress notes or pre-completed notes
Dos:
Support the standard of care
Avoid inaccurate, outdated, or redundant information
Avoid propagation of false information or typos
Verify patient demographic information, medications, dosages, etc.
The Need for Quality Assurance
51
Critical Errors for Clinicians
Wrong medication/wrong dosage. Examples: 15/50; mg/mcg; sound-alike drugs (such as sildenafil, vardenafil, and tadalafil)
Wrong lab value outside the normal range
Wrong patient/wrong content (demographic errors). Examples: patient name/gender/age/race discrepancies in report
Joint Commission unapproved abbreviations. Examples: cc (use mL); U (use unit); IU (use international units); SC or SQ (use subcutaneous); R or Rt (use right); L or Lt (use left); AS, AD, AU (use left, right, or both ears); OS, OD, OU (use left, right, or both eyes); MS, MSO4, MgSO4 (write out morphine sulfate or magnesium sulfate); QD or QOD (use daily or every other day); trailing zero or lack of leading zero
Note that these are suggestions, not mandates. Critical Errors for Clinicians
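Flagging unapproved abbreviations is one of the few checks on this slide that lends itself to automation. Here is a minimal sketch that scans text for a small subset of the Joint Commission "do not use" abbreviations and suggests the preferred wording; the function name and the sample sentence are hypothetical, and a real program would need the full list plus careful handling of context (a standalone "U" is not always a dosage unit).

```python
import re

# Subset of the Joint Commission "do not use" list, mapped to
# the preferred wording. A real implementation would carry the
# complete list and handle case and context more carefully.
DO_NOT_USE = {
    "U": "unit",
    "IU": "international units",
    "QD": "daily",
    "QOD": "every other day",
    "cc": "mL",
    "MS": "write out morphine sulfate or magnesium sulfate",
}

def flag_abbreviations(text):
    """Return (abbreviation, suggested replacement) pairs found in text."""
    hits = []
    for abbrev, suggestion in DO_NOT_USE.items():
        if re.search(rf"\b{abbrev}\b", text):
            hits.append((abbrev, suggestion))
    return hits

print(flag_abbreviations("Insulin 10 U QD, flush with 5 cc saline."))
```

As with any automated screen, this only surfaces candidates; a human still decides whether "U" really meant "unit" in that sentence.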
52
Critical Errors for Clinicians
Medical word misuse. Examples: hypo/hyper; negative/positive; regular/irregular; no/known
Incomplete or missing data. Examples: Neurologic: 2+; Extremities show 2 to 3 over 4; X-ray shows pathologic fracture, no acute…
Incorrect side/site. Examples: right/left; humerus/femur; peroneal/perineal
Incorrect template/work type. Examples: vaginal vs. laparoscopic hysterectomy; tonsillectomy vs. adenotonsillectomy; H&P vs. discharge summary
Incorrect carbon copy distributions attributed to physician selection. Example: incorrect physician added to cc list by the originator
Inconsistencies. Examples: HPI: Patient has weakness. Musculoskeletal: Normal strength.
Critical Errors for Clinicians
57
AHRQ Study
58
So Linda Brady saw this AHRQ webinar come through her news feed, and lo and behold, this nice doctor had done an AHRQ-funded study on EHR documentation quality. Right up our alley, Linda thought, so she registered.
59
And guess what? She actually looked at transcribed record accuracy and compared it to front-end speech wreck documentation. And guess what else? We won! So Linda and Dr. Zhou talked, and Dr. Zhou wanted AHDI's input for future research. We suggested she focus on clinician-created documentation auditing programs and their success in improving documentation quality.
64
the nonsense record
65
Reed Gelzer, MD, came to Linda Brady and the AHDI board and asked for help in a project he calls defining the nonsense record. The question: “At what point does a record become completely useless?” In the nonsense record, we elaborate on the problem of impossibilities. The Nonsense Record
66
In the paper days, it was harder to put something patently ridiculous on a chart.
Humans assign higher authority to text on a screen than to handwriting on a page. The Nonsense Record
67
I thought this approached nonsense, but it may not, because what we're trying to figure out is when the entire record's function is thrown into doubt. This is a scribe-created ER note; the first part of the report is straight typing, and starting with "the onset was," the text is generated from a pick list. I find the risk factors here really confusing. Approaching Nonsense
68
Pretty Sure This Is Nonsense
PROGRESS NOTE
Pt off floor for MRI at time of rounds today.
PHYSICAL EXAMINATION
GENERAL APPEARANCE: On examination today the patient is well developed and well nourished.
HEAD, EYES, EARS, NOSE, AND THROAT: NCAT. PERRLA. EOMI. Sclera white, conjunctiva are pink. Pharynx clear.
NECK: Supple with no masses.
LUNGS: Clear to auscultation, both anteriorly and posteriorly.
HEART: Regular.
ABDOMEN: Soft, nontender.
NEUROLOGICAL: Intact.
Pretty Sure This Is Nonsense
69
Thank you! Questions?