The Phantom of the Opera, the longest-running show in Broadway history with nearly 30 years of continuous production, has captivated generations of fans with its compelling story and soaring score by Andrew Lloyd Webber. Though cut from an entirely different cloth, Sunday morning's AACC University course, “Trust But Verify: Getting the Most of Verification Protocols for FDA-Approved Methods”—part of the longest-running workshop at the AACC Annual Scientific Meeting—enjoys similar durability and cachet among clinical laboratorians for its enduring relevance and practicality.
Sten Westgard and David Koch, PhD, DABCC, led participants through the processes necessary to verify the performance of assays prior to implementing them in clinical laboratories. Food and Drug Administration (FDA) approval is one pathway to putting laboratory tests into clinical use, but even with FDA approval, each laboratory using a test must verify that it performs to manufacturer claims in the hands of the laboratory's staff and in its operations.
A lab’s environment (temperature, humidity, altitude), staff (training, expertise), and instrumentation (age, maintenance) all provide variables that affect an assay’s actual performance in each clinical laboratory.
“The FDA approval process is not perfect,” Westgard noted. “The 510(k) process was not intended to evaluate safety, but only to demonstrate substantial equivalence. The process sometimes simply shows the product is just as bad as something already on the market.”
Westgard walked participants through establishing goals and noted that approximately 70% of the recommendations for goals are based on what the technology can provide, not what would be desirable based on clinical needs or biological variation. Laboratories can use CLIA, RCPA, Rilibak, and other guidelines to establish the allowable error for the test under consideration.
Westgard then discussed how testing volume magnifies error rates. Some large reference laboratories may run 5 million tests per day: even accepting a 1% error rate would equate to 50,000 errors per day. As such, the goals selected for total allowable error (TEa) in the clinical laboratory should strive for acceptable analytical variation with a standard deviation (SD) of TEa/4—or, more desirably, an SD of less than TEa/6.
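The arithmetic behind these two rules of thumb is simple enough to sketch in code. The following is illustrative only; the TEa and SD values are invented for the example, and real goals would come from CLIA, RCPA, Rilibak, or similar sources as noted above.

```python
def daily_errors(tests_per_day: int, error_rate: float) -> float:
    """Expected number of erroneous results per day at a given error rate."""
    return tests_per_day * error_rate

def sd_verdict(sd: float, tea: float) -> str:
    """Grade an observed SD against the TEa/4 (acceptable) and
    TEa/6 (desirable) rules of thumb. Both values must share units."""
    if sd < tea / 6:
        return "desirable"
    elif sd <= tea / 4:
        return "acceptable"
    return "unacceptable"

# 5 million tests per day at a 1% error rate -> 50,000 errors per day
print(daily_errors(5_000_000, 0.01))  # 50000.0

# Invented example: observed SD of 1.5 against a TEa of 10 (same units)
print(sd_verdict(1.5, 10.0))  # "desirable", since 1.5 < 10/6
```

Note how quickly a seemingly small error rate becomes a large absolute count at reference-laboratory volumes, which is the argument for the tighter TEa/6 target.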
Koch focused on the use of Clinical and Laboratory Standards Institute (CLSI) evaluation protocol (EP) guideline documents to design proper verification experiments that evaluate accuracy, precision, reportable range, and reference interval. These documents include: EP15-A3, “User Verification of Precision and Estimation of Bias;” EP09-A3, “Measurement Procedure Comparison and Bias Estimation Using Patient Samples;” and EP28, “Defining, Establishing, and Verifying Reference Intervals in the Clinical Laboratory.”
Koch pointed out several times that the selection of samples to use in the verification studies is not to be taken lightly. The samples must reflect the ones that will be used in actual testing. And while QC or proficiency testing samples can be useful in hitting certain targets, their matrix may lead to misleading data. Koch also emphasized that it is important to recognize that in most cases, the method comparison studies are simply comparing against the “existing method” used in the laboratory and are not a true assessment of accuracy.
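Koch's point about method comparison can be made concrete with a minimal sketch: estimating bias of a candidate method relative to the laboratory's existing method from paired patient samples. The paired results below are invented for illustration, and as Koch cautioned, the figure produced is relative bias against the existing method, not a true measure of accuracy.

```python
from statistics import mean

def mean_bias(candidate: list[float], comparative: list[float]) -> float:
    """Average difference (candidate minus existing method) across paired
    patient samples. Estimates relative bias only: the comparative method
    is the lab's current assay, not a reference measurement procedure."""
    if len(candidate) != len(comparative):
        raise ValueError("paired samples required")
    return mean(c - e for c, e in zip(candidate, comparative))

# Invented paired glucose results (mg/dL), illustrative only
new_method = [92.0, 101.0, 250.0, 55.0, 140.0]
old_method = [90.0, 100.0, 245.0, 54.0, 138.0]
print(round(mean_bias(new_method, old_method), 2))  # 2.2
```

In practice the pairs would span the reportable range with real patient specimens, per the sample-selection cautions above, and the bias estimate would be compared against the allowable error goal.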
While relationships among in vitro diagnostic manufacturers, FDA, and clinical laboratorians differ markedly from Cold War-era U.S. and Soviet Union affairs, President Ronald Reagan's catchphrase based on the Russian proverb “Doveryai, no proveryai” (Trust, but verify) still applies, given the regulatory requirement that all CLIA-certified laboratories verify the performance of FDA-approved assays before putting them into use.