Many clinical labs are now incorporating a continual quality improvement philosophy into their quality management systems as part of an important evolution away from a reactive, problem-solving approach. Ours is no exception, and we hope our experience in analyzing a multiyear initiative will inform and inspire others embarking on this approach.*

The lab medicine community knows well the concepts, standardized processes, and guiding documents, such as ISO 15189 and CLSI QMS01, involved in assessing quality. Labs implement quality measures to produce accurate and precise results and to identify problems during the analytical phase of testing. We fulfill our quality assessment and assurance requirements by establishing quality control (QC) metrics and enrolling in proficiency testing (PT) programs. However, simply monitoring QC doesn’t necessarily lead to improvements, especially if labs set quality thresholds merely to meet regulatory requirements.

In contrast, effectively using quality metrics may expose other weaknesses in a laboratory process. When assessing a particular assay and looking for potential quality improvements, three key questions may help bring process limitations to the surface: Do we have the necessary rules in place to achieve sufficient analytical quality assurance for this assay? How much effort does it take to maintain the desired level of test result accuracy and quality? Can we maintain this level if our testing volume increases or we experience a significant change in staffing?

In some scenarios, reducing testing process inefficiencies means improving instrumentation reliability by switching to newer, more reliable platforms and refining methods for better precision and throughput. Standardizing troubleshooting practices also may reduce run-to-result time variability, which helps improve process consistency. To maintain the necessary analytical quality assurance, the experts at Westgard QC advise tailoring an appropriate set of QC multirules. These in turn may guide troubleshooting practices so that staff reach resolutions faster, regardless of individual expertise. Once the finalized improvement plan is in place, committing to change requires substantial effort and trust that the change will benefit the lab.
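As a minimal illustration of what a QC multirule check can look like (this is a generic sketch of two widely used Westgard rules, 1-3s and 2-2s, applied to z-scores; it is not the software or rule set the authors describe), the logic can be as simple as:

```python
# Each QC result is expressed as a z-score: (observed - target mean) / SD
# for the control material. Rules flag a run for troubleshooting.

def violates_1_3s(z_scores):
    """1-3s rule: any single QC result beyond +/-3 SD flags the run."""
    return any(abs(z) > 3 for z in z_scores)

def violates_2_2s(z_scores):
    """2-2s rule: two consecutive results beyond +/-2 SD on the same side."""
    return any(
        (a > 2 and b > 2) or (a < -2 and b < -2)
        for a, b in zip(z_scores, z_scores[1:])
    )

def qc_run_acceptable(z_scores):
    """Accept the run only if none of the configured rules is violated."""
    return not (violates_1_3s(z_scores) or violates_2_2s(z_scores))
```

Encoding the rules this way, rather than leaving them to individual judgment, is what lets troubleshooting proceed consistently and independently of a given user's experience.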

Case Study Lookback

Our trace and toxic element testing lab, like other labs across the country, is seeing downstream benefits from adopting a proactive strategy that attempts to anticipate future problems associated with lab performance, PT, and accuracy of test results. In our case this involved not only implementing a QC initiative but also going back afterward to assess how the changes we made affected our lab and whether they were successful.

We carried out a lab quality assessment and focused our efforts on a multi-element urine assay that had higher-than-desired variability. Routine QC analysis showed the assay’s average imprecision coefficient of variation (CV) was 7.5% and exceeded 15% for outlier analytes. Our root cause analysis suggested we needed to take a closer look at the total testing process with a detailed process map.
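For readers less familiar with the metric, the imprecision CV quoted above is conventionally computed per analyte from repeated QC measurements as (SD / mean) × 100. A minimal sketch (illustrative only; not the authors' QC software):

```python
import statistics

def imprecision_cv(qc_results):
    """Percent coefficient of variation for one analyte's repeated
    QC measurements: (sample SD / mean) * 100."""
    mean = statistics.mean(qc_results)
    sd = statistics.stdev(qc_results)  # sample standard deviation (n - 1)
    return 100 * sd / mean
```

A CV of 7.5% therefore means the run-to-run scatter of QC results for that analyte averaged 7.5% of the target concentration.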

This proved illuminating, as we documented long and variable run times of 7 ± 6 minutes per sample. We attributed this in part to older, less reliable inductively coupled plasma mass spectrometry (ICP-MS) instrumentation. Troubleshooting this ICP-MS process often was cumbersome because staff lacked standardized instructions for specific troubleshooting steps and relied on individual experience to find solutions to assay problems.

Our improvement efforts included implementing updated instrumentation using the same ICP-MS methodology, redeveloping our analytical method, instigating a robust QC strategy, and installing software that integrated best practices for QC monitoring and troubleshooting. We sought to improve analytical performance, efficiency, and throughput.

Within 2 months we carried out a post-implementation assessment of this initiative as part of our continual improvement efforts and found our changes had yielded substantial productivity gains. In particular, thanks to an approximately 75% reduction in troubleshooting time and markedly shorter analytical run times of 5 ± 1.8 minutes per sample, our capacity rose and we were able to handle greater testing volume efficiently without increasing the lab’s footprint.

The drop in troubleshooting time, especially among advanced staff trained in this process, reduced our labor costs for this assay by an estimated 43%. The new method also demonstrated reduced imprecision and bias, with an average CV of 4.4% and <12% for most outlier analytes.

As often occurs with operational changes, new bottlenecks emerge once old ones clear. In this case, during the early stages of implementing our new method we identified a new area for improvement: our double-verification process, where delays in secondary verification increased in-lab turnaround time. We mitigated this delay later in the implementation, once staff grew more accustomed to when runs became ready for secondary verification.

Both our process improvement and the associated lookback offered valuable lessons about new limitations and future areas of focus. As difficult as it may be, maintaining momentum after successful quality improvement initiatives and keeping a lab focused on using its newly gained experience to solve emerging shortcomings is crucial. We also learned that long-term management support is essential as labs embrace a philosophy of constantly striving for better clinical testing. Multiyear quality improvement efforts can be a big burden and won’t be effective without lab staff’s acceptance of the new quality vision.

Embracing such ideas takes time, but a robust follow-up study like ours may demonstrate the value of the effort and philosophy. It may give labs confidence that larger-scale improvement initiatives will be worth the time and effort, perhaps even transforming staff attitudes from “we have no control over it” to “we can work to make it better.”

Valentinas Gruzdys, PhD, is a clinical chemistry fellow at the University of Utah in Salt Lake City. +Email: valentinas.gruzdys@aruplab.com.

Frederick Strathmann, PhD, DABCC (CC, TC), is vice president of quality assurance, director of new technology and innovation, and assistant laboratory director at NMS Labs in Willow Grove, Pennsylvania. +Email: frederick.strathmann@nmslabs.com.

*At the time referenced, Dr. Strathmann was employed at ARUP Laboratories.