Laboratory errors lead to unnecessary follow-up testing and delayed or incorrect diagnoses, adding millions of dollars annually to healthcare budgets. The sole purpose of clinical laboratory quality control (QC) is to protect patients from unacceptable risk of harm from incorrect or medically unreliable results. If a laboratory’s QC processes cannot detect unacceptable risk, then all the time, money, and effort put into them is wasted. Unfortunately, many laboratories may be performing QC that is not adding value.

Assessing a Medically Unreliable Result

What constitutes a medically unreliable result (MUR), and how much risk is unacceptable? An incorrect result is “a result that does not meet the requirements for its intended medical use; in the case of quantitative test procedures, a result with a failure of measurement that exceeds a limit based on medical utility” (1). This echoes advice from the Stockholm (2) and Milan (3) conferences, which recommend clinical limits as the best choice for allowable total error (TEa) limits.

Monthly QC review should ensure that method performance is stable and that the number of MURs per year is acceptable. Daily QC should be able to detect a change in method accuracy or precision that would cause too many MURs to be reported. Of course, it is imperative that QC samples be commutable and mirror patient samples.
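
As a back-of-the-envelope illustration of that review, the sketch below translates an annual MUR budget into the error rate that daily QC must be able to guard against. The test volume and MUR budget are hypothetical, not figures from any laboratory in this article.

```python
# Back-of-the-envelope sketch: convert an annual MUR budget into an error-rate
# threshold for daily QC. Both numbers below are hypothetical illustrations.

tests_per_year = 50_000          # assumed annual patient-result volume for one analyte
acceptable_murs_per_year = 50    # assumed clinically acceptable number of MURs per year

acceptable_error_rate = acceptable_murs_per_year / tests_per_year
print(f"Acceptable error rate: {acceptable_error_rate:.2%}")                          # 0.10%
print(f"MURs per 100-patient run at that rate: {acceptable_error_rate * 100:.1f}")    # 0.1
```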

In a recent informal LinkedIn poll, I asked, “If a laboratory method fails, when should you see a QC reject flag (with 100 patients in each run)?” Forty-five percent chose either “when sigma drops to 1.65” or “when the error rate reaches 5%,” which amount to the same thing. The rest of the votes were split evenly between “when sigma drops to 3.0” (equivalent to roughly 0.1 MUR per failure event) and “if one error is reported.” While the literature offers a plethora of references on TEa limits, there is currently no consensus standard for the maximum allowable number of laboratory errors.
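
For readers who want the arithmetic behind those equivalences, here is a minimal sketch assuming normally distributed analytical error and a one-sided shift; the conventional sigma metric, (TEa − |bias|)/CV, is taken as given.

```python
# Arithmetic behind the poll options, assuming normally distributed analytical
# error and a one-sided shift relative to the TEa limit.
from math import erfc, sqrt

def error_rate(sigma: float) -> float:
    """Expected fraction of patient results exceeding the TEa limit."""
    return 0.5 * erfc(sigma / sqrt(2))      # one-sided normal tail, P(Z > sigma)

for sigma in (1.65, 3.0):
    rate = error_rate(sigma)
    print(f"sigma {sigma}: {rate:.2%} of results fail TEa "
          f"(~{rate * 100:.2f} MURs per 100-patient run)")
# sigma 1.65 -> ~5% (about 5 MURs per 100 patients)
# sigma 3.0  -> ~0.13% (about 0.1 MUR per 100 patients)
```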

The Clinical and Laboratory Standards Institute (CLSI) EP23-A document (1) states that “at the least, the ability of the QC procedures to detect medically allowable error should be evaluated.” Through various studies on QC effectiveness (4,5,6), I have repeatedly found that approximately 30% of QC processes would never detect a simulated shift in the mean that caused 5% of patient results to fail the TEa limit. While I prefer QC processes that allow only one MUR per failure event, a 5% acceptable failure rate does represent popular practice.
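
The sketch below shows, in simplified form, how this kind of evaluation can be done under a normal model. It is not the evaluation software used in those studies; the 1-3s rule, two QC results per run, and the sigma values are illustrative assumptions, and it presumes the chart mean and SD reflect current, unshifted performance and that QC error mirrors patient-result error.

```python
# Simplified sketch: would a given QC rule flag a shift large enough to send
# 5% of patient results past TEa? Generic normal-model calculation only; the
# 1-3s rule, two QC results per run, and the sigma values are assumptions.
from math import erfc, sqrt

def tail(z: float) -> float:
    """One-sided normal tail probability, P(Z > z)."""
    return 0.5 * erfc(z / sqrt(2))

Z_5_PERCENT = 1.645     # when the remaining sigma margin falls to 1.645, ~5% of results exceed TEa
QC_RULE_LIMIT = 3.0     # assumed 1-3s rule applied to a single QC result
QC_PER_RUN = 2          # assumed QC measurements per run at this level

for method_sigma in (6.0, 4.0, 3.0, 2.0):
    shift = method_sigma - Z_5_PERCENT        # shift (in SD) that pushes 5% of results past TEa
    p_single = tail(QC_RULE_LIMIT - shift)    # chance one QC value exceeds the +3 SD limit
    p_run = 1 - (1 - p_single) ** QC_PER_RUN  # chance the run is flagged at all
    print(f"sigma {method_sigma}: shift {shift:+.2f} SD -> "
          f"P(reject this run) = {p_run:.1%}")
```

Under these assumptions, a 6-sigma method is flagged almost every run once 5% of results fail, while a 3-sigma method is flagged only about one run in ten, and a 2-sigma method almost never.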

Why QC Fails

I recently explored further why QC fails with Sanford Moos, QC auditor at Sherman Abrams Labs in Brooklyn, New York, who kindly sent me QC data from three calcium QC samples to evaluate. For the low level, the chart mean was set 1 standard deviation (SD) above the measured mean, so the 1-5s rule recommended by the QC software would in practice reject a QC result only if it fell more than 6 SD above, or 4 SD below, the measured mean. With a negative bias and a sigma value of 7.0, the mean could shift -4 SD before the new sigma value would reach the defined acceptable risk level of 3.0 sigma, and a shift of -4 SD would place the mean exactly at the assigned QC rule limit of -5 SD. At that point, half of the QC results would still fall above the 1-5s limit and half would send reject flags, so it would probably take two QC runs to detect this method failure.

Level 2 contained a clerical error and could not be evaluated. Level 3 had a sigma value of only 3.2, so a shift of just +0.2 SD would drop it to the 3.0 acceptable-risk threshold and make the method fail the acceptable risk criteria. The recommended 1-5s rule would never detect a shift that small.
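
Both observations can be reproduced with a few lines of code under a simple normal model. The sketch below assumes one QC result per run at each level and that the chart SD matches the method's true SD.

```python
# Sketch reproducing the two calcium scenarios under a normal model: a 1-5s
# rule with the chart mean set 1 SD above the measured mean (low level), and
# a +0.2 SD shift against the same 1-5s rule (level 3). Assumes one QC result
# per run and a chart SD equal to the method's true SD.
from math import erfc, sqrt

def tail(z: float) -> float:
    return 0.5 * erfc(z / sqrt(2))      # P(Z > z) for a standard normal

def p_reject(shift_sd: float, chart_offset_sd: float = 0.0, rule_sd: float = 5.0) -> float:
    """Probability one QC result falls outside +/- rule_sd of the chart mean.

    shift_sd: true shift of the measured mean, in SD units.
    chart_offset_sd: how far the chart mean sits above the measured mean.
    """
    upper = rule_sd + chart_offset_sd - shift_sd   # SD distance from shifted mean to upper limit
    lower = rule_sd - chart_offset_sd + shift_sd   # SD distance from shifted mean to lower limit
    return tail(upper) + tail(lower)

# Low level: chart mean +1 SD off, mean shifts -4 SD -> mean sits on the lower limit.
p_low = p_reject(shift_sd=-4.0, chart_offset_sd=1.0)
print(f"Low level: P(reject per run) = {p_low:.0%}, "
      f"expected runs to detect ~ {1 / p_low:.1f}")    # ~50%, ~2 runs

# Level 3: a +0.2 SD shift against a 1-5s rule is essentially invisible.
p_l3 = p_reject(shift_sd=0.2)
print(f"Level 3:   P(reject per run) = {p_l3:.2e}")    # on the order of 1e-6
```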

After further discussion, Mr. Moos and I agreed on several reasons why QC could fail in laboratories:

  • Focus on the cost, rather than the value, of quality control.
  • Too little evaluation of the ability of QC procedures to detect medically allowable error. Daily QC will reliably detect a significant shift from current performance only if the mean and SD values on the QC chart reflect current performance, and only if the testing frequency and the QC rules or statistical limits have been verified to send a reject signal before an unacceptable number of MURs are reported.
  • Lack of testing QC samples at clinically meaningful levels.
  • Assumption that good proficiency testing (PT) performance equals acceptable quality, when in fact it would take roughly a 20% error rate before one PT sample in five would be expected to fail, and a failure state could persist for months before PT detected it (see the sketch below).
  • Setting of arbitrary or statistical, rather than clinical, limits for the acceptable number of MURs per year and per failure event.
  • Failure to perform routine risk evaluation comparing the estimated risk to the acceptable risk criteria (e.g., the number and cost of error).
  • Insufficient education on the basics of QC.
  • Differences in QC practices and acceptable standards between labs.
  • Belief that QC is merely a perfunctory task that does not affect patient care.
  • Government regulations that allow, or even encourage, labs to run QC only once a day.

In addition, laboratory accreditation inspectors do not necessarily enforce QC recommendations from CLSI, CLIA, or the International Organization for Standardization.
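
On the proficiency-testing point above, a rough sketch (assuming a five-sample PT event and statistically independent samples, both simplifications) shows how long a sustained 5% failure state could hide from PT:

```python
# Rough illustration of the PT point above, assuming a five-sample PT event
# and independent samples. The 5% rate is the acceptable-risk threshold
# discussed earlier, not a measured value.
N_PT_SAMPLES = 5

for error_rate in (0.05, 0.20):
    expected_failures = N_PT_SAMPLES * error_rate
    p_at_least_one = 1 - (1 - error_rate) ** N_PT_SAMPLES
    print(f"error rate {error_rate:.0%}: expected PT failures = {expected_failures:.2f}, "
          f"P(>=1 failure) = {p_at_least_one:.0%}")
# 5%  -> ~0.25 expected failures; only ~23% chance any PT sample fails in an event
# 20% -> ~1 expected failure per event
```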

What can labs do about this? First, laboratorians must decide whether they are willing to keep reporting bad results or whether they are ready to change. It is important that laboratorians understand that better QC saves money and improves patient care. The good news is that software now exists to evaluate QC and quantify the savings.

References

1. Clinical and Laboratory Standards Institute (CLSI). Laboratory quality control based on risk management; approved guideline. CLSI document EP23-A. Wayne, PA: CLSI; 2011.
2. Kallner A, et al. The 1999 Stockholm Consensus Conference on quality specifications in laboratory medicine. Scand J Clin Lab Invest 1999;59:475.
3. Brooks Z. Benchmarking laboratory quality. ResearchGate 2001.
4. Brooks Z. The business case for optimized quality control practice. ResearchGate 2012.
5. Brooks Z. Evaluation of Quality OptimiZer software to simplify application of CLSI EP23-A, minimize patient risk, and reduce clinical cost. ResearchGate 2015.

Zoe Brooks, ART, is CEO, cofounder, and director of research and innovation at AWEsome Numbers, Inc. +Email: [email protected]