Listen to the Clinical Chemistry Podcast
Dr. Allan Jaffe is the Laboratory Medicine Division Chair at the Mayo Clinic in Rochester, Minnesota.
This is a podcast from Clinical Chemistry, sponsored by the Department of Laboratory Medicine at Boston Children’s Hospital. I am Bob Barrett.
Clinical Chemistry is launching a new series of podcasts titled, "The Clinician's Perspective". Every month, Clinical Chemistry will sit down with a prominent clinician to discuss highly relevant clinical questions or applications in the field. Joining us this month is Dr. Allan Jaffe, the Laboratory Medicine Division Chair at the Mayo Clinic in Rochester, Minnesota. Dr. Jaffe's career has been dedicated to the investigation of biomarkers that characterize the pathobiology of acute cardiovascular disease. He joins us to discuss the proliferation of high-sensitivity assays and their adoption in the laboratory.
So, Dr. Jaffe, let's get down to the basics, just what is a high-sensitivity assay?
Well, there has been a fair amount of confusion about this, and it's actually a little easier in the United States than it is in some other places in the world. Assay sensitivity has progressively increased, and we now have assays that are starting to be able to measure troponin in large numbers of normal subjects. The best way to define and segregate these assays is by the extent to which normals are detected.
So, Fred Apple has suggested that high-sensitivity assays should detect a substantial number of normal individuals. He's argued that there should be gradings of 50%, 75%, and 100%, and most of the high-sensitivity assays do belong there. That does distinguish those assays from most of the assays that are available -- I would call them "contemporary assays" -- which usually detect fewer than a third of such individuals.
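The grading just described can be encoded as a simple classification rule. This is a toy sketch only: the tier boundaries (50% and one-third) come from the discussion above, but the labels and the "intermediate" category are illustrative assumptions, not an official nomenclature.

```python
def classify_assay(pct_normals_detected: float) -> str:
    """Classify an assay by the percentage of normal subjects in which it
    detects measurable troponin.

    High-sensitivity assays detect a substantial fraction of normals
    (suggested tiers of 50%, 75%, 100%); most "contemporary" assays
    detect fewer than a third.
    """
    if pct_normals_detected >= 50.0:
        return "high-sensitivity"
    if pct_normals_detected < 100.0 / 3.0:
        return "contemporary"
    # Between a third and a half: neither category clearly applies.
    return "intermediate"

print(classify_assay(80.0))  # high-sensitivity
print(classify_assay(30.0))  # contemporary
```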
Now, the reason I say there is tension is two-fold. First, for some of these new assays, like the high-sensitivity troponin T assay, the initial validation suggested it detected 80% of normal subjects, but a subsequent validation suggested it detected only about 35%. So there are sometimes differences depending upon the population that is tested and the specific equipment that is utilized. But I think that, in general, it's probably the best categorization we have. In that sense, none of the high-sensitivity assays are available in the United States. The quote-unquote "high-sensitivity" troponin T assay is available in most of the rest of the world, save the United States.
The only other comment about it is the implication that the ability to detect all of these normal subjects means clinical sensitivity will also be enhanced, which is probably not 100% true. I think that is true for these assays, but it may not be one to one with their ability to detect normal subjects.
There is so much impressive new data available right now. Is this because of high-sensitivity assays?
Well, part of it is, and part of it is not. One of the things that has happened that has sort of confused people is that there are a lot of recent studies starting to use our good solid contemporary assays that we all know and love, whether they're from Beckman, Ortho, Abbott, or Siemens, which are very, very good assays, but that have not previously been used at the recommended cut-off, which is the 99th percentile of the reference range.
And so, I would argue that a couple of the very topical New England Journal papers that have come out, and a recent paper looking at the response to therapy in patients who have potential acute coronary syndromes, all of which show that lower levels of troponin identify more patients at risk who are helped by these therapies, really do not involve high-sensitivity assays. They are standard assays simply now being used, for the first time, at what all of us have recommended for years to be the appropriate cut-offs. And I think one has to be careful not to conflate those two, or one will really be confused about what is high-sensitivity and what is not.
I would point out that there have been many people who have said, "Well, we really can't tell, so we'll just lump them all together." Although one can make that argument in some ways, particularly given what I told you earlier about the high-sensitivity troponin T assay perhaps not performing as well clinically as in the laboratory, to do that means that clinicians don't have a real good sense of what high-sensitivity is. As a matter of fact, at the last European Society of Cardiology meeting, there was a debate about high-sensitivity assays in which one of the speakers went out of his way to specifically conflate the two. So I think we have to be very careful about that. That said, there are exciting new data coming out with high-sensitivity assays as well, and making that distinction is important.
Well, how exactly do high-sensitivity assays help?
Well, they are somewhat more sensitive than the standard assays we have, so they begin to identify a variety of different things. First of all, they identify more patients who have acute myocardial infarction. It's not a huge number of additional patients, because as you get down to more and more sensitivity, there are more and more other diseases that can also cause elevations, but there is a significant number. For example, in a recent study which I participated in with a group from Spain, in addition to the 35 MIs, we identified 10 more with high-sensitivity troponin.
Those are 10 patients who get the benefit of therapy, the benefit of being identified, and the benefit of secondary prevention. So there is a benefit to those patients. But the more exciting data that have been coming out with high-sensitivity assays suggest that troponin will now have a very important role in helping to define many, many more chronic disease states. For example, if one uses high-sensitivity troponin to evaluate patients who have heart failure, it markedly improves the ability to risk stratify those patients, and perhaps that will eventually lead to therapeutic trials where we'll be able to use troponin to monitor therapy.
If one looks at a community population and asks, "Who's going to get heart disease long-term?" it appears that high-sensitivity assays facilitate your ability to pick those patients out and perhaps then, over the long term, to develop preventive strategies.
There are recent data with a sort of pseudo high-sensitivity assay (since I've made the argument that we have to be careful about labeling these) suggesting that the propensity to have emboli when you have atrial fibrillation can be determined much more accurately by high-sensitivity types of assays and troponin values than by any other method. The increased sensitivity will also open up the ability to monitor chemotherapy, much of which is cardiotoxic.
So there are a huge number of more chronic disease states that will now be amenable to being probed and looked at. The downside for the clinical community is that a given elevation can't be used in isolation, and one will need to be careful to distinguish those elevations that are due to, let's say, a heart attack or acute myocardial infarction from those that may be due to any of these other processes. And that's what makes clinicians wary about high-sensitivity assays.
From their point of view, this is, in one sense, simply noise that makes it harder for them to define patients who are having a heart attack, which is obviously an emergency circumstance that we all want to be aware of. That said, there is a very facile way to look at that, and that is by looking for a changing pattern of results. Patients who have anything acute, not just a heart attack or an acute coronary syndrome, but any acute problem, whether it's septic shock, which can cause elevations of troponin, or acute myocarditis, will present with a rising pattern of values.
So the important issue here is that you can make these distinctions, not perfectly yet, because we're still defining the metrics to do that, but often, by looking for a change in values as a way of determining who is acute and who is not. These values, unfortunately, are not going to be something where someone can make a blanket statement that "it's x percent," because they are all, unfortunately, going to be assay dependent.
And right now, there is some controversy about whether one would do better using absolute numbers, like the number of picograms or nanograms per mL by which the value has changed, versus using some sort of percentage criterion. That's simply an area where further work is still needed to define those metrics in a more convenient way. But that strategy has been shown to markedly improve the specificity of elevations, and it allows one to bin them more intelligently into those that are more acute and those that tend to be more chronic.
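The two competing delta criteria just described can be sketched side by side. This is a minimal illustration only: the thresholds used here (5 ng/L absolute, 20% relative) are hypothetical placeholders, since, as noted above, the real cut-offs are assay dependent and still being defined.

```python
def acute_by_absolute_delta(first: float, second: float,
                            threshold_ng_l: float = 5.0) -> bool:
    """Absolute-change criterion: the serial change in troponin
    exceeds a fixed concentration (illustrative threshold)."""
    return abs(second - first) > threshold_ng_l

def acute_by_relative_delta(first: float, second: float,
                            threshold_pct: float = 20.0) -> bool:
    """Percentage-change criterion: the serial change exceeds a
    fraction of the baseline value (illustrative threshold)."""
    if first == 0:
        return second > 0
    return abs(second - first) / first * 100.0 > threshold_pct

# A chronically elevated but stable patient: high baseline, small change.
# Neither criterion flags this pattern as acute.
print(acute_by_absolute_delta(50.0, 53.0))   # False: only 3 ng/L change
print(acute_by_relative_delta(50.0, 53.0))   # False: only 6% change
```

Note how the choice of metric matters near the limit of detection: a rise from 2 to 6 ng/L is a small absolute change but a 200% relative change, which is part of why the field has not yet settled on one approach.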
What are some other challenges with high sensitivity assays?
Well, I think there's a huge educational challenge that we need to meet. Clinicians, for years, did not use the proper cut-offs for the assays that we had because they were worried about this issue of noise that we talked about earlier. So there's great reluctance to use the assays at the recommended cut-off values, where we know they provide optimal sensitivity but where one has to be careful about specificity.
And so there are two major challenges. One is, in the interest of good patient care, we need to make sure that clinicians use the recommended cut-off, the 99th percentile of the reference range, despite the fact that it will bring an increase in the number of more chronic elevations.
And then we need to educate clinicians, on an assay-specific basis, about the metrics that need to be used to define a changing pattern, so that we end up with the best possible ways of triaging patients. It may well turn out, depending upon the needs of a given clinic, that one set of cut-offs works for one group and not for another. For example, if you're in the emergency department, you are very likely to want very good sensitivity and to care a little less about specificity.
So you may want to use one metric to get sensitivity, while in the hospital, cardiologists may want better specificity and may want to use a different metric. We need to develop all of those programs in an intelligent way, and the only way to do that is with experience, by using these assays and developing those algorithms.
Finally, there is a real effort that will be facilitated I would argue with high-sensitivity assays, if we can figure out exactly how to use them to exclude myocardial infarction. This is a major problem for emergency departments where many patients sit around for prolonged periods of time being evaluated.
And some preliminary data suggest that if your troponin is very, very low when you first come in, the likelihood of subsequently having a heart attack is extremely low. If you then add patients who have perhaps waited a little longer to come to the hospital and are at least six hours from their time of symptom onset, you may get up to almost 50% of the population who can be excluded and therefore sent home, or sent on to additional evaluative procedures, much earlier. And I think this is an area where we need to develop additional metrics.
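The early rule-out idea above reduces to a simple two-condition check. As a sketch only: the "very low" concentration limit used here (5 ng/L) and the six-hour window are illustrative assumptions standing in for thresholds that, as the discussion notes, are still being worked out and are assay specific.

```python
def can_rule_out(troponin_ng_l: float, hours_since_onset: float,
                 very_low_limit: float = 5.0,
                 min_hours: float = 6.0) -> bool:
    """Return True when the preliminary early rule-out criteria are met:
    a very low troponin at presentation AND enough time elapsed since
    symptom onset for an infarction to have produced a detectable rise."""
    return troponin_ng_l < very_low_limit and hours_since_onset >= min_hours

print(can_rule_out(2.0, 8.0))   # True: very low value, well past onset
print(can_rule_out(2.0, 3.0))   # False: too close to symptom onset
print(can_rule_out(12.0, 8.0))  # False: troponin not very low
```

The second case is the crucial one: a low value drawn too early after onset cannot safely exclude infarction, which is why the time criterion is paired with the concentration criterion.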
What has traditionally happened is that people have said, "Well, we ruled them all in pretty rapidly; therefore, they're all ruled out pretty rapidly." And what's starting to develop is the idea that maybe we have diagnosed them all pretty early, but if you really don't want to miss very many, maybe there need to be different strategies for rule-out and rule-in, and this is a major challenge.
Dr. Allan Jaffe is the Laboratory Medicine Division Chair at the Mayo Clinic in Rochester, Minnesota. He's been our guest in this Clinician's Perspective from Clinical Chemistry. I'm Bob Barrett. Thanks for listening!