Our laboratory staff consists of highly educated and experienced lab techs. When errors occur, we consistently re-educate the staff, but it does not seem to change our error rates. Is our approach to error reduction in need of quality improvement?
Corinne R. Fantz, PhD
Patient Safety Focus Editorial Board Member
All laboratories face the challenge of developing sustainable, strong interventions that prevent or dramatically reduce human error that may harm patients. Education is the most commonly used intervention, but unfortunately it does not prevent the same errors from recurring. Most laboratory managers choose education because it is something everybody understands and is relatively quick and easy to implement.
Relying too heavily on education is not the only way lab managers fall short of the full benefit of quality improvement efforts. Other common mistakes can also thwart error reduction. By recognizing and confronting them, however, lab leadership can avoid these pitfalls.
Documentation Rather than Interventions
Today’s hospital-based incident reporting systems make it easy to document errors, but at the same time they make it painless, too. There is a tendency to consider the case closed once the documentation is filled out. In comparison to the time spent documenting errors, managers tend to devote much less effort to error analysis and corrective interventions because these tasks can be difficult, time-consuming, and risky. Yet, spending less time on documentation and more time on strong approaches to error reduction typically results in more substantive quality improvements (1).
The Vigilance Problem
According to Roger Resar, MD, a senior fellow at the Institute for Healthcare Improvement, laboratories that fail to achieve and maintain low error rates share common flaws (See Box) (2). One such flaw is relying on processes and improvement methods that depend too much on vigilance and hard work by the laboratory staff. Have you ever heard a colleague or supervisor say, "Well, if they would just pay attention," or "Geez, it was written in the procedure"? The implied message is that errors would not occur if staff just worked harder and followed procedures. It is true that vigilant, active monitoring of processes can help achieve higher levels of reliability. But if a process is flawed, vigilance and effort will not be enough to avoid errors.
Six Common Flaws in Laboratories
with Weak Quality Improvement
- Educating staff on tasks that they already know.
- Focusing on low-priority aspects of quality improvement.
- Relying too much on hard work and vigilance.
- Benchmarking to mediocre outcomes.
- Failing to link actions to poor patient outcomes.
- Having a permissive attitude toward clinical autonomy.
Source: reference 2.
Benchmarking and Mediocrity
Laboratories that have difficulty achieving low error rates also tend to employ mediocre benchmarking. This practice gives laboratory directors and supervisors a false sense of how reliable the laboratory’s processes truly are. Unfortunately, laboratories often benchmark to other laboratories’ mediocre outcomes that result from similar unreliable processes. This is the so-called cream-of-the-crud benchmark. For instance, laboratory managers and staff may be comforted to learn they have a similar mislabeling rate as their peers. But by relying on data that does not reflect a high quality operation, they may mistakenly assume that their lab’s process is fine. In reality, benchmarking in the absence of best-in-class benchmarks does nothing to ensure quality.
Linking Adverse Actions to Adverse Outcomes
Laboratory managers often have difficulty linking a particular process failure to a particular adverse patient outcome. For example, patient harm caused by a misidentification error on a chemistry panel specimen is rare. In fact, most laboratory managers have never been involved with or heard of such cases. As a result, there may be a sense of complacency regarding process improvement for mislabeled specimens in the core lab.
In contrast, consider the process of correctly identifying a blood bank specimen. In blood banks, mislabeling a sample is more closely tied to patient harm because transfusion mismatches can be fatal and are frequently publicized. This is one reason that hospital blood banks have invested considerable effort in strong, reliable sample identification processes.
An Illustrative Case
A technologist mistakenly reported a human chorionic gonadotropin (hCG) result as negative instead of positive, causing a pregnant patient to undergo a procedure that would have been avoided had her pregnancy been known. After an investigation, the laboratory manager discovered that the technologist made a clerical error and selected the wrong shortcut code, 01 for negative instead of 02 for positive, when releasing the qualitative result.
Mistakes to Avoid
- Focusing on re-educating the technologist on the difference between 01 (negative) and 02 (positive).
- Emphasizing to the technologist that she needs to be more careful.
- Documenting that the technologist was re-educated and told to be more vigilant, filling out the incident report paperwork, and closing the case.
A Better Approach
After ruling out impairment or incompetence, the laboratory manager should console the technologist because it was not her intention to cause harm. Having ruled out the technologist as the source of the error, the manager then investigates the process for reporting hCG results.
In this particular case, the first step is to prevent the clerical error. An example of a strong intervention would be redundant data entry (3). The first safeguard would be to have two technologists independently verify and enter the qualitative result, with the result released to the electronic medical record (EMR) only when the two entries match in the lab information system (LIS). The second would be to have the LIS produce a warning when the entries do not match and prevent the result from being released to the EMR. With this added step, the technologist has an opportunity to investigate and correct the error before the result reaches the EMR.
These interventions use redundancy to decrease the opportunity for clerical errors and create a process to warn the technologist to correct the error should a failure occur. Such steps are much more effective at improving the overall safety of manual result reporting.
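The double-entry logic described above can be sketched in a few lines of code. This is a hypothetical illustration only, not an actual LIS interface; the shortcut codes mirror the case, but the function name and return format are invented for the example.

```python
# Hypothetical sketch of redundant (double) data entry for a qualitative
# result. The shortcut codes mirror the case above; everything else
# (function name, return format) is illustrative, not a real LIS API.

NEGATIVE, POSITIVE = "01", "02"

def release_result(entry_tech_a: str, entry_tech_b: str) -> dict:
    """Release a qualitative result to the EMR only when two
    independently entered values match; otherwise warn and hold."""
    if entry_tech_a != entry_tech_b:
        # Mismatch: block release and flag the result for investigation.
        return {
            "released": False,
            "warning": "Entries do not match; investigate before release.",
        }
    return {"released": True, "result": entry_tech_a}

# Matching entries are released; a mismatch is held with a warning.
print(release_result(POSITIVE, POSITIVE))  # released, result "02"
print(release_result(NEGATIVE, POSITIVE))  # blocked, with a warning
```

The point of the sketch is that the mismatch path does not depend on anyone's vigilance: a disagreement between the two entries mechanically blocks release until it is resolved.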
Issues of Autonomy
Giving practitioners too much clinical autonomy regarding laboratory processes is another barrier that keeps labs from achieving low error rates. For instance, some labs allow clinicians to dictate when they prefer to be called for critical values. A permissive attitude decreases the likelihood of standardization and forces the organization to maintain an unnecessarily complex and inefficient infrastructure.
Here's an example of how this might play out in practice. On Mondays when Dr. Smith is in the hospital, she wants to be called if her patients' potassium levels are <3 mmol/L. But on Tuesdays and Fridays, Dr. Jones is covering and he likes to be alerted when potassium levels fall below 2.5 mmol/L. However, on Wednesdays and Thursdays, Dr. Johnson wants to be called for anything outside the normal range. With this complicated set of rules, how would laboratory management train new staff so that each clinician's request is handled correctly and errors are not made? A critical value notification policy and process like this causes confusion for the lab staff, frustration for providers, and could compromise patient safety when the laboratory gets it wrong.
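The tangle of clinician-specific rules can be made concrete with a small sketch. The names and the Monday/Tuesday thresholds come from the example above; the "normal range" used for Dr. Johnson (3.5–5.1 mmol/L) is an assumed illustrative reference interval, not a clinical recommendation.

```python
# Hypothetical sketch of per-clinician, per-day callback rules versus a
# single standardized critical value. Names and the <3.0 / <2.5 thresholds
# mirror the example in the text; the normal range (3.5-5.1 mmol/L) is an
# assumed illustrative reference interval.

PER_CLINICIAN_RULES = {
    "Mon": ("Dr. Smith",   lambda k: k < 3.0),
    "Tue": ("Dr. Jones",   lambda k: k < 2.5),
    "Wed": ("Dr. Johnson", lambda k: k < 3.5 or k > 5.1),
    "Thu": ("Dr. Johnson", lambda k: k < 3.5 or k > 5.1),
    "Fri": ("Dr. Jones",   lambda k: k < 2.5),
}

def must_call(day: str, potassium_mmol_l: float) -> bool:
    """Staff must know the day, who is covering, and that clinician's
    personal threshold -- an error-prone lookup to perform under pressure."""
    _, rule = PER_CLINICIAN_RULES[day]
    return rule(potassium_mmol_l)

# The same potassium of 2.8 mmol/L triggers a call on Monday but not Tuesday.
print(must_call("Mon", 2.8))  # True
print(must_call("Tue", 2.8))  # False

# A standardized policy collapses the whole table into one rule for everyone:
CRITICAL_LOW_POTASSIUM = 3.0

def must_call_standardized(potassium_mmol_l: float) -> bool:
    return potassium_mmol_l < CRITICAL_LOW_POTASSIUM
```

The same result value producing opposite answers on different days is exactly the kind of inconsistency that confuses staff and undermines standardization.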
Getting it Right
Avoiding these common quality improvement mistakes takes practice, but it can lead to sustained error reduction in the laboratory.
- Astion ML. Why telling staff to "be more careful" doesn't work: patient safety interventions. Clinical Laboratory News 2009;35(7):15–16. Available online.
- Resar R. Making noncatastrophic health care processes reliable: Learning to walk before running in creating high-reliability organizations. Health Research and Educational Trust 2006. DOI: 10.1111/j.1475-6773.2006.0057.x
- Kawado M, Hinotsu S, Matsuyama Y, et al. A comparison of error detection rates between the reading aloud method and the double data entry method. Control Clin Trials 2003;24:560–9.