Listen to the JALMTalk Podcast
Amy B. Karger. To Delta Check or Not to Delta Check? That Is the Question. J Appl Lab Med 2017;1:457-459.
Dr. Amy Karger is an assistant professor of Laboratory Medicine and Pathology at the University of Minnesota.
Hello, and welcome to this edition of “JALM Talk” from The Journal of Applied Laboratory Medicine, a publication of the American Association for Clinical Chemistry. I’m your host, Randye Kaye. The subject of today’s podcast is delta checks. Delta checks were historically implemented to detect analytical errors and improve quality control in the clinical laboratory. This is accomplished by enforcing manual technologist review of samples that exceed established limits for a specific analyte. In the January 2017 issue of JALM, a Technical Tips article entitled “To delta check or not to delta check, that is the question” discusses recently published literature including a CAP Q-Probe study and the CLSI Consensus Guideline that question the utility of delta checks. Further, this article gives laboratories tips on how to evaluate the validity of their current delta checks and execute changes if they are found to be ineffective.
The author of this article is Dr. Amy Karger, an assistant professor of Laboratory Medicine and Pathology at the University of Minnesota. Dr. Karger serves as medical director of the Westbank Laboratory, which primarily services the University of Minnesota Masonic Children’s Hospital in Minneapolis, and she’s our guest for today’s podcast. Welcome Dr. Karger.
Your article tackles the subject of delta checks, which have been a long-standing quality control practice for clinical laboratories since the 1970s. Why has this re-emerged as a hot topic for discussion today?
So you’re absolutely correct. Delta checks were first described a little over 40 years ago, in 1974, as a means to flag and identify lab error. They are traditionally defined as the difference between a current lab result and a previous result that occurs within a set time period. Samples whose delta checks exceed the established limit for an analyte are held for manual review to determine whether the significant change in a lab result is due to an error.
Additionally, they can be used to flag clinically significant changes in a patient’s status, with alerts to clinicians implemented. With the vast improvements in laboratory automation, barcoded labeling, and instrument technology since the 1970s, the frequency of lab errors has decreased significantly. Therefore, the recent literature on the topic of delta checks has questioned their efficacy in detecting true laboratory errors. There are several publications over the last decade which demonstrate that delta checks now have very high false positive rates due to the very low error rates we are typically seeing in clinical labs today.
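To make the traditional univariate approach concrete, here is a minimal sketch of a delta-check rule in Python. The analyte names, change limits, and time windows below are hypothetical illustrations only; real limits are lab-specific and must be validated locally, not taken from this example:

```python
from datetime import datetime, timedelta

# Hypothetical delta-check limits: (absolute change limit, time window).
# These values are illustrative, not recommendations.
DELTA_LIMITS = {
    "potassium_mmol_L": (1.0, timedelta(hours=24)),
    "mcv_fL": (5.0, timedelta(days=7)),
}

def delta_check(analyte, current, previous, t_current, t_previous):
    """Return True if the result should be held for manual review."""
    limit, window = DELTA_LIMITS[analyte]
    within_window = (t_current - t_previous) <= window
    exceeds_limit = abs(current - previous) > limit
    return within_window and exceeds_limit

# A potassium change of 1.8 mmol/L within 6 hours exceeds the 1.0 limit.
flag = delta_check("potassium_mmol_L", 5.9, 4.1,
                   datetime(2017, 1, 2, 14, 0), datetime(2017, 1, 2, 8, 0))
```

The high false-positive rates discussed above arise because this rule fires on any large change, whether its cause is a specimen mix-up or a genuine clinical event.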
An alternative approach that has been touted is the use of a multivariate delta check, which can decrease the false positive rate from what is seen with the univariate approach. However, the multivariate approaches that have been published are often statistically complex or they require programming of complex algorithms into the laboratory information system, and therefore may not be practical for a routine clinical lab to implement.
I see, so if the delta checks are not as useful as they once were, do you think laboratories should still be using them?
So that’s a great question. And given the recent literature, it’s important for laboratories to review the utility of their own current delta check practices, as they may in fact have delta checks in place that are ineffective. Currently, there are no regulatory requirements for clinical labs to routinely review their delta check parameters and therefore, delta check practices may be something that a lab established in the distant past that isn’t subject to scrutiny or review on a regular basis.
The good news is that the Clinical and Laboratory Standards Institute, or CLSI, recently published the first consensus guidelines on delta checks, guideline EP33, entitled “Use of Delta Checks in the Medical Laboratory” and this was just published in March of 2016.
This is the first document of its kind to outline a standard approach for selecting and implementing delta checks, and it also provides some guidance on how to monitor the effectiveness of delta checks. My recommendation is that lab directors start by reading through these guidelines, as they provide a helpful framework for examining delta check utility.
They do sound really helpful. Do the guidelines give a set list of delta checks that they recommend for clinical labs to use?
So unfortunately, they don’t give a standard list. The guidelines instead emphasize that each clinical laboratory is unique in terms of its instrumentation, its patient population, and its patterns of specimen errors, and therefore they recommend that each lab customize its delta checks to meet its own needs.
That being said, they acknowledge that certain analytes are better candidates for delta checks if they have high individuality. In other words, if an individual’s results change minimally over time when compared with a population-based reference interval.
The classic example of this is mean corpuscular volume, or MCV, which remains relatively constant in a healthy person. The CLSI guidelines include a table of analytes and their respective indices of individuality, which can provide a good starting point for identifying the best candidates for delta checks.
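One common formulation of the index of individuality, which this sketch assumes, divides within-subject (plus analytical) variation by between-subject variation. The CV values below are illustrative placeholders, not numbers from the CLSI table:

```python
import math

def index_of_individuality(cv_within, cv_analytical, cv_between):
    """Index of individuality: sqrt(CV_I^2 + CV_A^2) / CV_G.

    Low values (well below 1) indicate high individuality, i.e. an
    individual's results vary little relative to the population spread,
    making the analyte a good delta-check candidate.
    """
    return math.sqrt(cv_within**2 + cv_analytical**2) / cv_between

# Illustrative CVs (%) only; consult the CLSI EP33 table for real values.
ii_mcv = index_of_individuality(cv_within=1.3, cv_analytical=0.5, cv_between=4.8)
# An index well below 1 is consistent with MCV being a strong candidate.
```

Analytes with an index near or above 1 vary as much within a person as across the population, so a population-based cutoff serves them about as well as a delta check would.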
Reevaluating delta checks does sound like it could be very time consuming for clinical labs. So, what are the potential benefits of doing this?
I completely agree that it’s a large task. We’ve in fact undertaken this in the lab that I direct, and it is a very slow but steady process. However, we have already seen a positive impact from reviewing our own delta checks, in terms of improving workflow and efficiency. During our delta check review, we quickly identified a subset of delta checks that were outdated and not effectively identifying laboratory error. These delta checks were impeding our lab’s efficiency by halting the autoverification of laboratory results.
Additionally, during our review process, we determined that due to the large number of delta checks flagged in our laboratory information system, our lab techs had gotten into the habit of ignoring certain delta check flags or alerts, because they simply did not have the time to assess each one individually. This alerted us to the fact that we needed to refine and simplify our delta check process, and ensure that the alerts were meaningful and were acted upon by the lab technician each time they appeared.
That sounds like a great idea. So, once you finalize a list of appropriate delta checks for your clinical lab, how do you decide whether they’re working effectively to identify lab error?
So, it turns out the CLSI guidelines discuss this very issue in a section entitled “Evaluating the Performance of Delta Checking After Implementation” and I highly recommend that clinical labs focus on this section in particular. They specifically recommend monitoring delta checks after implementation by tracking the numbers of true positive, false positive, and false negative delta checks. However, a key challenge with doing so is the fact that false negatives, which are defined as true errors that are not flagged by the delta check, are generally not identified by the laboratory, unless there’s a savvy clinician who recognizes an error and contacts the laboratory.
Despite the likely underreporting of false negatives, labs are still encouraged to track these numbers to assess the effectiveness of their delta checks. Based on the numbers, lab directors may then decide to change the parameters of the delta check to improve sensitivity or specificity, or they may decide to eliminate a delta check altogether, if it is deemed ineffective.
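The monitoring described above can be summarized with a short calculation. The counts below are hypothetical, and as noted, false negatives are usually under-reported, so the computed sensitivity should be read as an upper bound:

```python
def delta_check_performance(tp, fp, fn):
    """Summarize delta-check monitoring counts.

    tp: true positives (flags that caught real errors)
    fp: false positives (flags with no underlying error)
    fn: false negatives (true errors the check missed, typically
        known only when a clinician reports them)
    """
    sensitivity = tp / (tp + fn) if (tp + fn) else None
    ppv = tp / (tp + fp) if (tp + fp) else None
    return {"sensitivity": sensitivity, "ppv": ppv}

# Hypothetical monthly counts: 3 true errors caught, 240 false alarms,
# 1 missed error reported by a clinician.
stats = delta_check_performance(tp=3, fp=240, fn=1)
```

With numbers like these, a positive predictive value near 1% quantifies the trade-off the guest describes next: each true error caught comes at the cost of investigating many false alarms.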
The greatest challenge of this evaluation process, however, is the fact that there is no consensus on what defines an effective delta check. It’s really up to the discretion of the lab director to decide whether the true errors detected by the delta check are worth the extra time and effort taken to investigate the false positive delta checks that are flagged.
Okay, thank you very much. Is there anything you want to add? This pretty much covers it I think.
Yeah, I think so. Again, I think I would just encourage labs to review their delta check processes and hopefully that will help them become more efficient in their practices.
That’s terrific. Thank you so much for joining us today.
That was Dr. Amy Karger from the University of Minnesota talking about the JALM Technical Tips article “To delta check or not to delta check, that is the question” for this podcast. Thanks for tuning in for “JALM Talk.” See you next time, and don’t forget to submit something for us to talk about.