Interpreting results of immunoassay-based methods frequently presents a challenge for physicians, especially when caring for patients at multiple institutions that use different assay platforms. For many analytes, including tumor markers, endocrine hormones, and cardiac biomarkers, results generated on different platforms are not directly comparable. This is due to the absence of a universally accepted reference material that manufacturers could use to calibrate their assays to a common standard.

Instead, test results must be interpreted using assay-specific reference intervals—a concept that comes naturally to clinical laboratorians but is often unfamiliar to physicians and patients. This lack of comparability causes confusion that can adversely affect patient care, particularly when patients are diagnosed at one hospital but pursue follow-up care elsewhere. For example, does an increased CA-125 value at follow-up at a different institution reflect disease progression or simply a difference in assay calibration? A lack of standardization also makes it impossible to transfer diagnostic cutoffs from one institution to another unless the assay platforms are identical.

Given the confusion associated with non-standardized assays, why haven’t all immunoassays already been standardized? For which analytes would standardization make a particularly positive impact on patient care? These questions were addressed yesterday in a short course, “Challenges and Clinical Impacts of Standardization of Immunoassays,” led by Bernard Cook, PhD, Catharine Sturgeon, PhD, and Alan Wu, PhD.

According to Cook, “harmonization is possible. We’ve done it and we know we can do it again.” He pointed to several recent examples of successful assay standardization efforts, including thyroid function and vitamin D testing. He also noted ongoing programs to standardize testosterone and estradiol assays. Despite these success stories, he cautioned that “standardization takes a big effort. It requires collaboration among lots of different groups and often can be overwhelming.” In addition, manufacturers often are reluctant to commit to standardization because introducing a new assay requires resubmission to FDA—a time-consuming and expensive process.

Commutability of reference materials is another problem. For example, after significant effort, AACC led the development of a NIST Standard Reference Material (SRM) for cTnI, but the material later was shown to be less stable than native patient pools. While use of this SRM as a common assay calibrator may help move in the direction of standardized cTnI assays, its partial commutability will likely still lead to assays with very different performance characteristics. Until better reference materials are available, Wu indicated that “we can at least standardize the cTnI 99th percentile within each manufacturer.” He reminded the audience of the importance of contributing blood to AACC’s Universal Sample Bank to facilitate this effort.

While efforts to fully standardize cTnI assays continue, the potential for diagnostic confusion will remain. Dina Greene, PhD, will discuss the impact of non-standardized cTnI assays on patient care at a Brown Bag session this afternoon, “Troponin 99th Percentiles: More than Just a Single Number.” The Third Universal Definition of Myocardial Infarction requires a rise and/or fall in cardiac biomarkers with at least one value above the 99th percentile upper reference limit. However, the absence of standardization means that each cTnI assay has a different 99th percentile, resulting in different diagnostic cutoffs. Greene will describe how interpreting cTnI results against an incorrect 99th percentile can lead to misdiagnosis and adverse patient outcomes.
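To make the 99th-percentile issue concrete, the sketch below simulates two hypothetical cTnI platforms measuring the same healthy reference population with different calibrations. All assay names, concentrations, and the population itself are invented for illustration; the point is only that each platform yields its own upper reference limit, so a cutoff is meaningful only for the assay that produced it.

```python
import random

def percentile_99(values):
    """99th percentile by linear interpolation between closest ranks."""
    ordered = sorted(values)
    rank = 0.99 * (len(ordered) - 1)  # 0-indexed rank position
    lower = int(rank)
    frac = rank - lower
    if lower + 1 < len(ordered):
        return ordered[lower] + frac * (ordered[lower + 1] - ordered[lower])
    return ordered[-1]

# Simulated cTnI results (ng/L) from 200 healthy donors. Assay B measures
# the same donors but, due to different calibration, reports values on a
# different scale (a purely hypothetical 2.5-fold difference).
random.seed(42)
assay_a = [random.lognormvariate(1.0, 0.6) for _ in range(200)]
assay_b = [2.5 * v for v in assay_a]

cutoff_a = percentile_99(assay_a)
cutoff_b = percentile_99(assay_b)

# The same numeric result must be judged against its own assay's cutoff:
# a value flagged as elevated on one platform may be unremarkable on another.
print(f"Assay A 99th percentile: {cutoff_a:.1f} ng/L")
print(f"Assay B 99th percentile: {cutoff_b:.1f} ng/L")
```

Because the scaling here is a simple multiplicative factor, the two cutoffs differ by exactly that factor; real inter-assay differences are messier (nonlinear, epitope-dependent), which is why a common commutable reference material, rather than a conversion factor, is needed for true standardization.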