
There is perhaps nothing more essential to laboratory medicine than quality. No matter how cost-efficient, how rapid the turnaround time, or how advanced the method, reporting an incorrect result can be even worse for patient care than reporting no result at all. Certainly, laboratorians' obsession with quality is amply evident at this year's AACC Annual Meeting, with a wide range of roundtables, symposia, and short courses that deal with quality from various perspectives.

How labs should go about defining, measuring, and ensuring quality will be a dynamic discussion here in Houston. Experts will tackle everything from proficiency testing (PT) to informatics and automation. One of several common themes: don't take quality for granted. For example, at this morning's short course, "How Do I Know If Laboratory Quality Measures Are Telling Me Anything?" speakers will confront an issue that may surprise laboratorians: some quality measures simply don't work as intended.

"I think we need to continually push the boundaries of quality and set the bar high by re-evaluating our metrics," said Frederick Strathmann, PhD, one of the speakers at the short course. "Just about everybody has inherited quality metrics from a laboratory director before them. At the time, they may have been cutting edge, but sometimes because there is more data available or new processes are in place, they're just not adequate any longer." Strathmann is a medical director in toxicology at ARUP Laboratories and an assistant professor of pathology at the University of Utah School of Medicine in Salt Lake City.

Rethinking Quality Metrics

As automation and advanced information technology systems have proliferated in labs, quality metrics have evolved to include not only the quintessential hands-on measures of instrument performance, such as calibration, but also the billions of bytes of data that reside on laboratory information system servers.

At the short course this morning, Strathmann will use a case-based approach to explore the benefits and hazards of digital quality data. Too often, laboratorians take a cut-and-paste approach to metrics, and in some cases end up with plenty of numbers but not much real knowledge of their quality systems, Strathmann emphasized. For this reason, it is particularly important for laboratorians to understand and continually evaluate their metrics, even those that have proven valuable in the past.

An example most labs can relate to is delta checks, which compare a patient's current and previous test results against predetermined biological and time limits. Labs employ delta checks to catch problems such as preanalytical errors, analytical errors, and mislabeled specimens. However, as Strathmann and his colleagues demonstrated in a 2011 paper, delta checks don't always work the way they're supposed to (Clinica Chimica Acta 2011;412:1973–7).
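In its simplest form, a delta check is a threshold rule. The Python sketch below shows one minimal version, assuming an absolute-change limit within a fixed time window; the rule form and the example thresholds are illustrative only, since real rules vary by analyte and may instead use percent-change or rate-of-change limits.

```python
def delta_check(current, previous, hours_elapsed, max_delta, max_hours):
    """Flag a result whose change from the prior value exceeds a
    predetermined biological limit within a predetermined time window.
    (Minimal sketch; thresholds are illustrative, not recommendations.)
    """
    return hours_elapsed <= max_hours and abs(current - previous) > max_delta

# Example: a sodium of 128 mmol/L against a prior 140 mmol/L drawn
# 12 hours earlier trips a hypothetical 8 mmol/L, 24-hour rule.
flagged = delta_check(current=128, previous=140, hours_elapsed=12,
                      max_delta=8, max_hours=24)  # True
```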

"We found that many of these delta checks really don't do very well when it comes to actually detecting mislabels," Strathmann said. "This is one of those areas where some people are still essentially cutting and pasting out of their favorite reference, or another laboratory that they know of, but when you look at the actual performance, some of them have no chance of ever actually finding a mislabel."

In the study, Strathmann and his colleagues performed simulations using historical laboratory test results, randomly sampling pairs of results drawn either successively from the same patient or from two different patients, then evaluating how well delta check rules at various thresholds distinguished the two. While individual analytes varied greatly in their usefulness for detecting mislabels, many, such as sodium and potassium, performed very poorly.
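A minimal sketch of that simulation strategy appears below; the data layout, the absolute-difference rule, and the pairing details are assumptions made for illustration, not the specifics of the published method.

```python
import random

def delta_check_mislabel_performance(history, threshold, n_pairs=10_000, seed=1):
    """Estimate how well an absolute-difference delta check detects
    mislabels. `history` maps patient IDs to chronologically ordered
    results for one analyte (a hypothetical layout) and must cover at
    least two patients with two or more results each.
    """
    rng = random.Random(seed)
    patients = [p for p, vals in history.items() if len(vals) >= 2]
    detected = spurious = 0
    for _ in range(n_pairs):
        # Simulated mislabel: the "current" result belongs to another patient.
        a, b = rng.sample(patients, 2)
        if abs(rng.choice(history[b]) - rng.choice(history[a])) > threshold:
            detected += 1
        # Correctly labeled specimen: two successive results, same patient.
        vals = history[rng.choice(patients)]
        i = rng.randrange(len(vals) - 1)
        if abs(vals[i + 1] - vals[i]) > threshold:
            spurious += 1
    # Sensitivity for mislabels vs. the false-flag rate on correct pairs;
    # sweeping `threshold` traces out the trade-off for each analyte.
    return detected / n_pairs, spurious / n_pairs
```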

"For some of these, it's really not worth the technologist's time to investigate a delta check when there's almost no chance there is a problem," Strathmann commented. "Back when these uses for delta checks were first proposed in the 1970s, it was a completely different world. Some of these errors were considerably higher than they are now because labs didn't have the same technology and processes we have now to control errors."

Joining Strathmann at the podium will be Geoffrey Baird, MD, PhD, who will explain how to automate new laboratory quality metrics for use both before and after a lab implements automation.

Getting the Most Out of External QC

This morning, a symposium titled "Preventing Critical Proficiency Testing Failures," developed with the College of American Pathologists (CAP), offers laboratorians practical advice on an area that not only affects quality, but also bears the weight of regulatory scrutiny. Brad S. Karon, MD, PhD, an associate professor of laboratory medicine and pathology at Mayo Clinic in Rochester, Minn., will moderate this symposium and dig into the details of the regulatory requirements for PT. Karon, who chairs the CAP continuous compliance committee that oversees the PT required by the CAP Laboratory Accreditation Program, will be joined by John Olson, MD, PhD, who serves on the same committee and directs the clinical labs at the University of Texas Health Science Center in San Antonio.

Under the Clinical Laboratory Improvement Amendments (CLIA) in the U.S., labs must enroll in three PT events a year for each analyte CLIA defines as regulated. A PT event for a regulated analyte consists of five samples, and a lab must report results within range on at least four of the five to pass the event. An occasional failure is not uncommon, and CLIA allows a lab to fail one of three events on a rolling basis. In other words, after a failure a lab needs to pass at least the next two events in a row, irrespective of the calendar year.
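A simplified sketch of that grading logic, in Python, is shown below; it illustrates the rolling rule as described here, and is not a restatement of the regulatory text.

```python
def event_passed(samples_acceptable):
    """An event passes when at least 4 of its 5 sample results fall
    within the acceptable range (an 80% score)."""
    return sum(samples_acceptable) >= 4

def pt_performance_ok(event_passes):
    """Rolling check: a single failed event is tolerated, but failing
    two consecutive events, or two of any three consecutive events,
    counts as unsuccessful performance. (Simplified illustration.)"""
    for i in range(len(event_passes) - 1):
        if not event_passes[i] and not event_passes[i + 1]:
            return False
    for i in range(len(event_passes) - 2):
        if event_passes[i:i + 3].count(False) >= 2:
            return False
    return True

# Example: one failed event followed by two passes remains acceptable.
history = [event_passed([1, 1, 1, 0, 0]),   # 3 of 5: event failed
           event_passed([1, 1, 1, 1, 0]),   # 4 of 5: event passed
           event_passed([1, 1, 1, 1, 1])]   # 5 of 5: event passed
ok = pt_performance_ok(history)  # True
```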

Clerical errors should not be overlooked, as they come up more often than some laboratorians might imagine, Olson noted. For example, CMS and deemed accreditors such as CAP require labs to submit an activity menu listing every test they perform. The accreditor cross-checks this menu against the lab's PT enrollment to make sure each analyte is covered, and the cross-check sometimes turns up tests for which a lab never enrolled. "It's not an uncommon event at all," Olson said. "I don't think it's usually done intentionally, but as an oversight. Labs can avoid this by using checklists for the addition of a new test, which include the development of a reference interval, validation, and the analytical measurement range of the test. The checklist also should include an item about enrolling in PT, because it's going to have to be done by the time the test is in production." An actual PT challenge is not required before a lab begins testing patient samples, but the test must be enrolled in PT by then.

Another pitfall: sometimes labs enroll in PT but not in the correct peer group. For many analytes, PT is graded by peer group, with results pooled from labs that use the same instrument and reagents. Within a peer group, a lab's result must fall within range of a target value, determined according to a formula based on the mean of all participant responses.
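A rough sketch of peer-group grading appears below; the SD-based acceptance formula is an assumption for illustration, since the actual CLIA criterion varies by analyte and may instead be a fixed percentage or an absolute unit range.

```python
from statistics import mean, stdev

def within_peer_limits(result, peer_results, sd_limit=3.0):
    """Grade one lab's response against its peer group: the target value
    is the peer-group mean, and a result is acceptable here when it falls
    within `sd_limit` peer standard deviations of that target.
    (Illustrative formula only; `peer_results` needs two or more values.)
    """
    target = mean(peer_results)
    return abs(result - target) <= sd_limit * stdev(peer_results)
```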

Most importantly, when a PT failure does happen, labs should use a structured, organized approach, Olson emphasized. In fact, troubleshooting is required by CLIA and lab accreditors. "CAP provides a checklist that people can use, and some laboratories have developed their own, which helps provide a structured approach," he said. "You don't go into this with a preconceived notion of what the problem is, but instead really need something approaching a root cause analysis for that test, looking at all of the elements that go into generating the appropriate result."

Investigating a PT failure should also involve everyone in the lab who has a role in the testing that failed. It's up to lab directors to make the case that PT is an essential part of the lab's overall quality program. "It's important for managers as well as technologists to know that the leadership in the laboratory cares about this, and that they're going to pay attention to whether a failure is dealt with appropriately," Olson said. "And you do that not in a punitive way, but with the philosophy that something in the system has set the individual up to fail. What you want to do is find out what in the system is causing the problem and correct it, so that neither this individual nor future individuals will be set up to fail in the same way."

The bottom line, Olson emphasized, is that labs must manage PT in a way that prospectively prevents errors. Labs should not pass up the opportunity to learn from a PT failure, even when only one challenge is unsuccessful. "Any time there is an unsuccessful result in PT, the lab should take that as a signal that it's worthwhile to go through this process and find out why that individual unsuccessful result occurred," he said.

The Quest for Quality
Wednesday's Educational Sessions

July 31

Morning and Afternoon Roundtables

Determining Troponin Cutoffs: Defining a Reference Population to Establish the 99th Percentile Upper Reference Limit of Normal
7:30 – 8:30 a.m. and 12:30 – 1:30 p.m.

Challenges of Quality Control in Modern Analytical Systems
7:30 – 8:30 a.m. and 12:30 – 1:30 p.m.

Delta Checks in the Clinical Laboratory
7:30 – 8:30 a.m. and 12:30 – 1:30 p.m.

Morning Short Courses

Calibration Verification, Analytical Range Validation, and Evaluation of Interferences: Meeting Regulatory Requirements and Assuring Test Quality
10:30 a.m. – 12:00 p.m.

How Do I Know If Laboratory Quality Measures Are Telling Me Anything?
10:30 a.m. – 12:00 p.m.

Morning Symposium

Preventing Critical Proficiency Testing Failures
10:30 a.m. – 12:00 p.m.

Afternoon Short Course

Autoverification of Clinical Laboratory Results
2:30 – 5:00 p.m.