Listen to the JALM Talk Podcast



Article

Danyel H. Tacker and Peter L. Perrotta. Quality Monitoring Approach for Optimizing Antinuclear Antibody Screening Cutoffs and Testing Work Flow. J Appl Lab Med 2017;1:678-689.

Guest

Dr. Danyel Tacker, Associate Clinical Professor of Pathology at West Virginia University and Section Director of the Chemistry and Mass Spectrometry Laboratories at JW Ruby Memorial Hospital in Morgantown, West Virginia.


Transcript


Randye Kaye:
Hello, and welcome to this edition of “JALM Talk” from The Journal of Applied Laboratory Medicine, a publication of the American Association for Clinical Chemistry. I’m your host, Randye Kaye.

Antinuclear antibody (ANA) testing is used to evaluate patients for connective tissue disease. The gold standard methodology is the immunofluorescence assay (IFA), but this method is somewhat manual and therefore time consuming for technologists. Because of this, more automated methods like enzyme immunoassay (EIA) have been developed. In some laboratories, EIA is utilized as a screening method, whereby equivocal or positive samples are reflexed to IFA testing. However, for these methodologies, cutoffs and appropriate dilution factors have to be implemented that optimize the sensitivity and specificity of the methods for detecting the presence of autoantibodies.

An article called “Quality Monitoring Approach for Optimizing Antinuclear Antibody Screening Cutoffs and Testing Work Flow,” published in the May 2017 issue of JALM, discusses one laboratory’s verification and implementation of cut points for EIA and the appropriate minimum dilution for IFA in the form of a testing algorithm. This was instituted in collaboration with the rheumatology service in order to optimize both the clinical utility and the laboratory workflow of this testing.

The first author of this article is Dr. Danyel Tacker, an Associate Clinical Professor of Pathology at West Virginia University and the Section Director of the Chemistry and Mass Spectrometry Laboratories at JW Ruby Memorial Hospital in Morgantown, West Virginia, and she is our guest for today’s podcast. Welcome, Dr. Tacker.

Danyel Tacker:
Thank you, Randye.

Randye Kaye:
First of all, what inspired you to undertake this study?

Danyel Tacker:
It really was more a project of necessity than anything. For many years, we had been performing immunofluorescence ANAs at our institution, and we were really just getting hammered at the workbench. So, volume was increasing to the point where we were having a really hard time staying on top of it. At the same time that we were starting to evaluate all of the testing that we were doing for ANAs, we were looking at how caregivers were ordering their testing. And we were seeing that a lot of ANA was being ordered in more of a screening capacity than a confirmatory capacity.

So, with this coinciding with the implementation of an automated EIA platform in our lab, we decided to test an ANA kit on the EIA platform and start looking at building an algorithm that would combine the strengths of a screening test by EIA with the confirmatory strengths of IFA. And that’s kind of where this took off, and what inspired us to create the project.

Randye Kaye:
Yeah, that makes a lot of sense because certainly workflow is a common challenge in laboratories. So what did the implementation of the algorithm require?

Danyel Tacker:
In the laboratory, we would definitely need a sensitive ELISA kit, an EIA kit, that would detect positive results at a very low level. We wanted to be able to capture positive patients, but we then needed to be able to discriminate the true positives from the false positives, and that’s where the IFA came in. So, we needed a good approach, a solid set of rules for how we were going to move from the EIA screening to the IFA confirmation. We also needed to get buy-in from our rheumatology team, because these are the providers that get all of the consults from all of the users that are accessing the algorithm.

And if it isn’t clear how the algorithm rolls out, if the results are not clear, rheumatology is going to get referrals that they shouldn’t. So we had to be mindful of their needs as specialists. We needed to retain their access to immunofluorescence, the IFA technique, and to give them access to EIA if they should prefer to do any retesting. So we needed a lot of clinical buy-in on this too; that was the second thing we needed. And we thought that the skepticism that rheumatology provided was really good for the process, because in rheumatology, in all of the consensus guidelines and as you mentioned in your introduction, the IFA method is the confirmatory gold standard method that the rheumatologists really trust. So, we needed to be sure that we had that focus and that lens provided by rheumatology to judge the adequacy of the algorithm.
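The reflex rules Dr. Tacker describes, an EIA screen with defined cutoffs that sends equivocal and positive samples on to IFA at an agreed minimum dilution, can be summarized in a short sketch. The Python snippet below is purely illustrative; the cutoff values, result labels, and starting titer are hypothetical placeholders, not the values used in the published algorithm.

```python
# Illustrative sketch of a two-stage ANA reflex algorithm (EIA screen -> IFA confirmation).
# The numeric cutoffs and result labels below are hypothetical placeholders,
# not the values reported in the study.

EIA_NEGATIVE_CUTOFF = 1.0   # hypothetical EIA index below which the screen is negative
EIA_POSITIVE_CUTOFF = 1.5   # hypothetical EIA index at or above which the screen is positive
IFA_MINIMUM_DILUTION = 80   # hypothetical minimum screening titer (1:80) for reflexed IFA

def ana_reflex(eia_index: float) -> dict:
    """Decide whether an EIA screening result should reflex to IFA confirmation."""
    if eia_index < EIA_NEGATIVE_CUTOFF:
        return {"screen": "negative", "reflex_to_ifa": False}
    # Equivocal and positive screens both go on to IFA at the agreed minimum dilution.
    screen = "equivocal" if eia_index < EIA_POSITIVE_CUTOFF else "positive"
    return {"screen": screen, "reflex_to_ifa": True, "ifa_start_dilution": IFA_MINIMUM_DILUTION}

if __name__ == "__main__":
    for value in (0.4, 1.2, 2.8):
        print(value, ana_reflex(value))
```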

Randye Kaye:
So, you tailored the availability of ANA testing according to the clinicians that you service. Where did you derive your goals for judging the success of the testing algorithm?

Danyel Tacker:
We really wanted to see if we could reflex at a rate that somewhat mirrored our positivity rates with IFA before the algorithm was implemented. So we had kind of a circular logic in creating these goals. We pulled some preimplementation data (IFA-only testing from the exact same system that we were about to service) and we looked at the rates of positive and negative results and compared them to disease data. So I was in the charts, reading all of these clinical notes and looking for disease.

The first dataset that we present in the paper is this preimplementation data that helped us create these goals. And what we saw was that we would reflex probably about 20% of the time and we thought that was pretty good. We saw that true clinical positive cases were coming from the IFAs about 10% of the time, great. So we set those expectations and then we launched the algorithm and the outcome wasn’t exactly what we predicted it would be.

Randye Kaye:
Okay, go on.

Danyel Tacker:
Yes. So then, the second dataset that we present in the paper is that first implementation dataset. It’s right when we had a fresh algorithm, we were doing all of this testing, and our reflex rate was over 60%. So we really were reflexing much more than we had anticipated. We were getting positives that were just inexplicably high, and we started to do the chart reviews in real time. That’s why the quality monitoring approach is mentioned in the paper: we were doing daily dashboards. We were pulling all of the ANA orders and looking in the charts. Why did this get ordered? Why was it positive? Is it a false positive? Is it a true positive? What’s happening here?

And we saw that we had a very high false positive rate, and this was confirmed by the IFAs, because the IFAs were reflexing. When we were getting the IFA results, they were saying, “No, sorry, your screen was a false result.” We were very disappointed by this, and we presented it to rheumatology and said, “This is what our status is,” and we started talking about cutoffs and what kind of cutoffs should really be used. I had been doing some literature searches in the background and found early papers, I cite one in the discussion, that mentioned the very kit we were using for the EIA and also mentioned altering the cutoff. And from what I could tell, the modeling that I was getting from looking at these results in real time was showing that the cutoff we should be using was higher, and actually pretty close to what they put in that paper almost 10 years before.

So we presented this information to the rheumatology team and they said, “I think we should update the cutoff. Definitely, we should do that. In the meantime, we’ve been asking you to increase your minimal dilution on IFA for years to cut back on false positives. Could we do this at the same time?” And we thought that was the best way to lock this algorithm together, because if we changed the EIA cutoff without thinking about what the IFA needs, we didn’t think it would be good to make one change and then the other. So we made both at the same time.

The third validation set in the paper, then, is the postimplementation set. This is after we changed the cutoffs. We updated the algorithm, and the rate of reflexing went from over 60% down to about 36%. So it’s a little higher than what we predicted from when we were only using IFA, but it was definitely better. And we noticed that we increased the accuracy of both of the tests by increasing the cutoffs, and we did not get extra false negatives; we did not lose anything, at least from what we could tell, by making these changes. Rheumatology was extremely pleased with the change too. That’s why we really decided to publish our findings.
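The daily dashboards described here come down to tracking a few rates over the accumulating ANA orders: how often EIA screens reflex to IFA, and how often the reflexed IFAs confirm. The sketch below is a hypothetical illustration of that kind of calculation; the data structure, field names, and example records are assumptions rather than the laboratory's actual monitoring system.

```python
# Hypothetical sketch of a daily quality-monitoring calculation for a two-stage
# ANA algorithm: reflex rate (EIA screens sent to IFA) and IFA confirmation rate.
# The AnaOrder fields and the sample records are illustrative assumptions.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class AnaOrder:
    eia_positive: bool            # EIA screen at or above the cutoff (reflexed to IFA)
    ifa_positive: Optional[bool]  # IFA result if reflexed, otherwise None

def daily_dashboard(orders: List[AnaOrder]) -> dict:
    """Summarize reflex and confirmation rates for one day's ANA orders."""
    reflexed = [o for o in orders if o.eia_positive]
    confirmed = [o for o in reflexed if o.ifa_positive]
    return {
        "orders": len(orders),
        "reflex_rate": len(reflexed) / len(orders) if orders else 0.0,
        "ifa_confirmation_rate": len(confirmed) / len(reflexed) if reflexed else 0.0,
    }

if __name__ == "__main__":
    sample = [
        AnaOrder(True, True),    # screen positive, IFA confirms
        AnaOrder(True, False),   # screen positive, IFA does not confirm (false positive screen)
        AnaOrder(True, False),
        AnaOrder(False, None),   # screen negative, no reflex
        AnaOrder(False, None),
    ]
    print(daily_dashboard(sample))  # reflex_rate 0.6, ifa_confirmation_rate ~0.33
```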

Randye Kaye:
Wow! So it’s truly a process, and a collaborative one as well, and rheumatologists are very busy clinicians, so I’m sure they looked forward to that update. It sounds like, and just confirm this, would you say that updating the algorithm really helped you find the right balance in test performance for the rheumatologists?

Danyel Tacker:
I think at our site, it really did, because the rheumatologists are not getting the consults from these false positive screens. Sometimes a generalist, a provider who is ordering the ANA screening and may not understand that it’s in two phases, will see the screen positive and say, “Oh, we need to refer you.” And maybe they have some clinical information that’s telling them that the referral might be appropriate. They see the patient; we don’t in the lab, of course. They may have some inside knowledge that’s driving them to do the referral. But if you’ve got a false positive screen in a patient with pain and fatigue, have you really eliminated all of the other potential causes, and is rheumatology going to have a fight ahead of them in trying to figure out what’s going on?

What rheumatology is telling me even now, and we follow up often, is that they’re getting fewer referrals for things that don’t really appear to be clinically consistent. They’re actually getting more challenging cases among these referrals now, but they’re getting fewer of them in total. So they’re getting the right kind of patients referred to them now, and that is really what we were trying to drive through implementing the algorithm and working with rheumatology on the project. So I think we really did find that balance for us. I think every site is going to have to work very closely with their rheumatology team to look at the goals, though, if they try a similar approach.

Randye Kaye:
Okay. Thank you. Very successful outcome so far, and a very interesting update. Thank you so much for joining us today.

Danyel Tacker:
Oh, thank you for having me, Randye. I really appreciate it.

Randye Kaye:
That was Dr. Danyel Tacker from West Virginia University talking about the JALM article “Quality Monitoring Approach for Optimizing Antinuclear Antibody Screening Cutoffs and Testing Work Flow” for this podcast. Thanks for tuning in for “JALM Talk.” See you next time, and don’t forget to submit something for us to talk about.