Listen to the Clinical Chemistry Podcast



Article

R.H. Christenson, D.M. Bunk, H. Schimmel, J.R. Tate, IFCC Working Group on Standardization of Troponin I. Point: Put Simply, Standardization of Cardiac Troponin I Is Complicated. Clin Chem 2012;58:165-168.

Guests

Dr. Rob Christenson is the Director of Clinical Chemistry Laboratories at the University of Maryland Medical Center and Dr. Fred Apple is the Medical Director of Clinical Laboratories at Hennepin County Medical Center in Minneapolis.



Transcript


Bob Barrett:
This is a podcast from Clinical Chemistry. I am Bob Barrett.

Troponin I is routinely used in the diagnosis, prognosis, monitoring, and management of disease, and has been incorporated into professional guidelines for cardiac disease diagnosis. Yet no standard procedure exists for assay of cardiac troponin I. Standardization is difficult because there are few reference measurement procedures for such analytes. Few reference materials have been developed and some of the reference materials that are available can be used only for assay calibration with restrictions.

Will a standard assay procedure for measuring cardiac troponin I ever occur? Joining us to provide differing views on this issue are Dr. Rob Christenson, the Director of Clinical Chemistry Laboratories at the University of Maryland Medical Center, and Dr. Fred Apple, Medical Director of Clinical Laboratories at Hennepin County Medical Center in Minneapolis. Their Point/Counterpoint article appeared in the January 2012 issue of Clinical Chemistry.

We will start with you, Dr. Apple. What are the major challenges that lead you to believe that standardization of cardiac troponin I assays will not be successful?

Fred Apple:
Cardiac troponin is a complex molecule. It exists in multiple forms: free troponin I, the binary troponin I-C complex, and the ternary troponin T-I-C complex. On top of these circulating forms, pathological processes cause oxidation, reduction, phosphorylation, and C- and N-terminal protease degradation. So you can begin to picture all the forms that could be circulating.

In addition, there are interfering substances, mostly proteins, that affect antibody binding. It could be the anticoagulant heparin, it could be an anti-cardiac troponin I autoantibody, it could be heterophile antibodies. All of these different forms of the troponin complex make it, no pun intended, a complex molecule.

So there are real challenges in getting an assay to show equimolar reactivity, that is, comparable cross-reactivity of the capture and detection antibodies used in manufacturers' assays toward this cardiac troponin I molecule, which, as we just explained, is not a simple single-protein entity.

If you look through the published literature, for example Tables 1 and 2 of a recent 2012 Clinical Chemistry Mini-Review by Paul Collinson and myself, those tables highlight all the different antibody approaches used by the many manufacturers measuring troponin I. You will see a huge diversity of capture and detection antibody strategies, which suggests there is no single uniform epitope region that everyone targets to measure these multiple forms.

And at present, to the best of my knowledge, no company has truly demonstrated how its assay performs against the multiple circulating forms that occur in the blood.

We know that the forms found in the blood reflect what is released from the myocardial tissue, and what is released depends on whether the heart has never been injured or has known myocardial pathology, because modifications that occur in the tissue are carried over when the proteins are released into the blood.

So if we look at a blood sample, the pattern of troponin forms after an acute injury to a previously healthy heart, during the rising phase, will differ from the pattern after an acute injury to, let's say, a sick or failing heart. There will be a heterogeneous mix of different molecules.

Look back at the initial work of the American Association for Clinical Chemistry's committee on standardization of cardiac troponin I, which was started back in the late 1990s with Rob Christenson as Chair and resulted in two publications in Clinical Chemistry, in 2001 and 2006. That committee worked with the National Institute of Standards and Technology, NIST, to develop Standard Reference Material 2921, a native cardiac troponin T-I-C complex.

And what we learned, when we looked at the different forms and the different assays, was that even after establishing some type of traceability to that standard, at best we took a 20- to 30-fold difference among assays down to a two- to four-fold difference, which I think underscores the challenges ahead in trying to standardize a cardiac troponin I assay.
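
To put that recalibration in concrete terms, here is a minimal sketch, using entirely hypothetical assays and numbers, of how dividing out each assay's recovery of a common reference material narrows, but does not eliminate, the spread between assays:

    # Hypothetical illustration only: four made-up assays measuring the same specimen.
    # "raw" is each assay's reported concentration; "rm_recovery" is the value that
    # assay reports for a common reference material, relative to its assigned value.
    assays = {
        "Assay A": {"raw": 5.0,   "rm_recovery": 1.0},
        "Assay B": {"raw": 60.0,  "rm_recovery": 6.0},
        "Assay C": {"raw": 120.0, "rm_recovery": 25.0},
        "Assay D": {"raw": 18.0,  "rm_recovery": 2.0},
    }

    def fold_difference(values):
        """Ratio of the highest to the lowest result across assays."""
        return max(values) / min(values)

    raw = [a["raw"] for a in assays.values()]
    # Recalibrated result: divide out each assay's recovery of the reference material.
    recal = [a["raw"] / a["rm_recovery"] for a in assays.values()]

    print(f"Fold difference before traceability: {fold_difference(raw):.0f}")    # 24-fold
    print(f"Fold difference after traceability:  {fold_difference(recal):.1f}")  # ~2-fold

Even with every assay nominally traceable to the same material, differing antibody reactivity toward the patient-derived troponin forms leaves the residual two- to four-fold spread described above.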

Bob Barrett:
What are your thoughts regarding the use of the serum-based secondary reference material as proposed by the IFCC working group on standardization of cardiac troponin I as their approach to traceability?

Fred Apple:
Efforts to standardize cardiac troponin I started, as I mentioned in answer to the first question, back in the late 1990s, and now, some 13 years later, most of the same people who were part of that AACC subcommittee on troponin I standardization are part of this IFCC working group on the standardization of troponin I.

It's a challenge. If you look at the membership of both groups, they are international experts: scientists from governmental agencies, in vitro diagnostics scientists who have spent their careers on cardiac troponin I, and academic PhD and MD scientists. Developing a serum-based secondary reference material is a great concept, but I think it's going to be a huge challenge, and I wish them the best of luck.

As I have noted in the past, serum-based secondary reference materials have been used to standardize a few proteins. However, the scientific approach that the IFCC working group is taking will, in my opinion, not likely be successful, because the long history shows that this has not been a protein for which standardization comes easily.

And what is the rationale behind that? First, as I mentioned, troponin I is a very heterogeneous molecule. Second, we now know that it undergoes substantial and very unpredictable modifications in both the heart and the blood following myocardial cell death. Third, the effects of these changes are not well described for any commercial assay in the literature.

Fourth, we still do not know the ideal epitopes to target in building a reference assay. This has not been established, and while I applaud the IFCC working group for trying to develop such an assay, I think they have challenges ahead. Fifth, the task of clearly defining an analyte that represents the clinically relevant form has, even in 2012, not been well or thoroughly described in the literature.

There are probably fewer than 20 patients who have actually been studied to see which of the multiple forms occur at different serial time points after myocardial infarction.

So we really don't have a good reference measurement procedure that shows the same response to the different forms. What we have now is something of a primary reference material, the NIST SRM 2921 we talked about, and we also know that it has not been found to be commutable with the majority of assays. This was published by the AACC Subcommittee on Troponin I Standardization.

It has been used by many manufacturers as a material for traceability, and even if it is also used to calibrate a secondary reference measurement procedure, as proposed by the IFCC working group, in my mind traceability does not equate to standardization. As good as the outline they have produced in Figure 1 of Rob Christenson's Point paper is, and it is a very elegant approach to the metrological traceability of this protocol, I do not think it is going to result in the standardization they are hoping for.

Now, what we have found, and this is actually not what I have found but what the IFCC working group has found, is that some of its own members have published a paper questioning Standard Reference Material 2921: it is prone to varying degrees of degradation, and the group has not adequately addressed this issue in its plan to use it as the material to standardize or calibrate the secondary reference material.

Finally, one thing that is not clear from the IFCC working group is what the serum-based commutable reference material will actually consist of. Is serum or plasma going to be defined as the collected aliquot?

Over what time period, the rising phase or the falling phase? And is it going to be based only on AMI patients, or will the troponin molecules released be different if the pathological process is different?

So there are still many, many questions without, I think, realistic answers. The effort is truly academic and will produce some great science, and I hope it works, but as a secondary reference material, I think the troponin I molecule is a little too tricky for this to be achieved in a realistic time frame.

Bob Barrett:
Well, clinically, what would be the impact of standardization, or the lack of it, of cardiac troponin I assays on the reporting of cardiac troponin I concentrations by the different assays in the marketplace?

Fred Apple:
The clinical impact of not having a standardized assay, I think, would be minimal, because that is what we have been living with. No argument, in theory it would be ideal to have a standard material and a standardized assay.

I think the only possible way to do this is for every manufacturer to back up and reformulate their assays, with universal agreement on which antibodies to use for capture and which to use for detection, and then, one hopes, a stable reference material to calibrate the assays. To me, that is the only way to get adequate cross-reactivity across all the forms we talked about.

So the clinical impact is that what we have is what we have to live with. We do know that it is important to completely understand the characteristics of the assay you are using and not to carry over numbers from other assays.

No two cardiac troponin assays should be considered alike, either in diagnostic accuracy or in the cutoff values used, meaning that every assay has to clearly define a reliable reference value at the 99th percentile and then undergo appropriate diagnostic accuracy studies of clinical sensitivity and specificity.

So people need to know how their assay performs, know their cutoffs, and know their normal ranges; with that, implementation in practice is not adversely affected. The same holds true for prognosis, management, and the impact of therapy on patients. Know your assay; don't try to adopt cutoff values from an assay that does not correlate with the one you have in your laboratory.

I think the major, and I will call it major, impact that standardization could have is in clinical trials. A large number of trials use cardiac troponin as a biomarker to define diagnoses and especially outcomes, and without a standardized assay, the cutoff values used to determine primary and secondary outcomes and goals are really all over the place.

So I think that would be a very strong rationale for how standardization would really help the field. But again, if you know your assays and you have a working group within your trial group that understands the cutoffs based on the 99th percentile recommended by the universal definition of AMI, there is probably a workaround you can reach.

The other area where standardization would truly be beneficial is journals: if laboratorians or physicians read an article about one troponin assay, they could carry a concentration, with the same interpretation, over to another assay. Most clinicians and providers are not attuned to the fact that a cutoff of 0.1 for one assay is not the same as a cutoff of 0.01 for a different assay.

So translating between two different assays' cutoffs causes confusion and can lead to misinterpretation of a patient's diagnosis, especially when physicians move between hospitals.
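
As a minimal sketch of that translation problem, consider the following, with hypothetical assays, units, and cutoff values; the point is simply that a result is interpretable only against the 99th-percentile cutoff of the assay that produced it:

    # Hypothetical illustration only: two fictional assays with different
    # 99th-percentile cutoffs (names, values, and units are made up).
    CUTOFFS_UG_PER_L = {
        "Hospital A assay": 0.10,
        "Hospital B assay": 0.01,
    }

    def exceeds_cutoff(assay: str, result_ug_per_l: float) -> bool:
        """True if a result exceeds the 99th-percentile cutoff of the assay it came from."""
        return result_ug_per_l > CUTOFFS_UG_PER_L[assay]

    result = 0.05  # a single patient result from Hospital B's assay

    # Judged against the assay that actually produced it: elevated.
    print(exceeds_cutoff("Hospital B assay", result))  # True
    # Naively judged against Hospital A's cutoff: looks "normal" -- a misinterpretation.
    print(exceeds_cutoff("Hospital A assay", result))  # False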

So at present we have to learn to live with the information we have about the specific assay we use in our laboratories, and clearly we should use the evidence-based literature to guide how we manage patients. As laboratorians, it is very important that we educate our providers that there is currently no standardization, and manufacturers need to be very clear that even different assays from the same company might not give the same diagnostic accuracy and performance.

Bob Barrett:
Thank you, Dr. Apple. Now, clearly there is a need for further activity on troponin standardization. Joining us now is Dr. Rob Christenson. Dr. Christenson, do you agree that standardization is an area that needs further work?

Rob Christenson:
I, and I think most other laboratorians, would be hard-pressed to think of an analyte with greater clinical value and impact than cardiac troponin. In addition to the use of troponin for diagnosis, risk stratification, and management of acute myocardial infarction and acute coronary syndromes, there is a rapidly developing body of evidence that troponin levels may have a role in the management of heart failure and in determining the risk of developing heart failure.

Further, in addition to the short-term risk of future events indicated by a snapshot troponin value, longer-term temporal changes in troponin over many months are also important.

An increase in troponin over time confers increased risk, whereas a decrease in troponin indicates lower risk. High troponin values, in fact, may respond to treatments aimed at mitigating cardiovascular risk. However, just as with cholesterol, hemoglobin A1c, PSA, and other biomarkers, standardized values are essential for developing and implementing clinical guidance based on specific values and changes over time.

Can you imagine physicians managing statin therapy with LDL cholesterol levels if LDL values differed substantially between assays? Clinicians would have to remember which hospital they were in that day, which assay the local lab was using in that time frame, and so on.

Although understanding the characteristics of the local troponin assay is, of course, vitally important from a public health perspective, being satisfied with the current troponin situation and the differences that may be encountered between assays is simply not appropriate from a clinical perspective.

We must have troponin values for a single specimen be the same whether they are measured in Sarasota, Sydney, Singapore, or Sarajevo, by any accepted troponin method. Patients clearly deserve better than the current situation.

Bob Barrett:
What is the difference between harmonization of an analyte such as troponin and its standardization?

Rob Christenson:
Harmonization and standardization of clinical laboratory analytes are similar in that both have the intended outcome of the same reported value for an analyte whether it is measured in Sarasota or in Singapore. They do differ in important ways by formal definition, however. Standardized analytes imply traceability to a robust primary reference measurement procedure, for example LC-tandem mass spectrometry.

Establishment of a primary reference procedure allows completion of the standardization chain that includes a primary reference material, a secondary reference method, secondary reference material, and then a value transfer process that yields, in the end, traceable calibrators for use in routine field methods.

Bob Barrett:
So why not go the harmonization route?

Rob Christenson:
Well, harmonization would be an improvement over the current situation, for sure. However, harmonization implies that the analyte is missing the link to a primary reference measurement procedure. The rest of the chain, that is, a secondary reference method, secondary reference material, and a value transfer to routine or field calibrators, is the same as with standardization. So standardization is superior, because establishment of a primary reference method and primary reference material helps assure that troponin values, or those of any other analyte, will be stable over time.

If the secondary or primary reference material becomes depleted, for example, it is possible to replenish it using very rigorous, stable methods. On the other hand, with harmonization, if the secondary material changes over time, the values for the routine methods may change as well, which is not the situation we want to end up with.

So when a lot of secondary material becomes depleted or undergoes modification over time, replenishing it in a harmonization scenario is much more complicated, because there is no primary reference method or material in the traceability chain to rely on for value assignment.

Bob Barrett:
Now, Doctor, the purified troponin material available from NIST, Standard Reference Material 2921, is a complex of all the cardiac troponins. Isn't it true that only a small fraction of cardiac troponin in human serum exists in exactly this form, and isn't that a problem for efforts at standardization using this SRM?

Rob Christenson:
So no, a thousand times no. The fact that standard reference material 2921 does not represent the isoform mix for the majority of troponin molecules in human blood is not a problem or issue. The reason why has to do with the intended use of the standard reference material.

According to metrological protocols, a primary reference material does not need to exactly mimic the analyte in its natural matrix, or even be commutable with most commercial methods, to serve as a primary reference material. In the case of troponin, this is because SRM 2921 will be used for the sole purpose of calibrating a secondary reference measurement procedure. Since SRM 2921's identity, purity, and protein concentration have been rigorously determined, values from measurements made with the calibrated secondary procedure are traceable to SRM 2921.

Then the calibrated secondary method will be used to assign values and uncertainty to a panel of commutable serum-based reference materials derived from patients, which do indeed have the spectrum of troponin isoforms. In this case, the planned secondary material is a serum pool.

So SRM 2921 will not be used to directly assign values to the working calibrators for commercial assays. Rather, SRM 2921 will be used as a primary reference material to calibrate the secondary method and to assign values to serum-based commutable materials that contain the isoforms found in heart disease patients. These commutable secondary materials will be used by manufacturers to assign values to the calibrators for their routine field assays.
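
As a rough sketch of that value-transfer chain, the following uses the roles just described, with SRM 2921 at the top and a manufacturer's working calibrator at the bottom; the link names follow the discussion, while the numbers and the code itself are purely illustrative:

    # Hypothetical sketch of the planned traceability chain; values are illustrative only.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class ChainLink:
        name: str
        assigned_value: float                      # value carried by this link (arbitrary units)
        calibrated_against: Optional[str] = None   # upstream link in the chain

    # SRM 2921 calibrates the secondary reference measurement procedure, which then
    # assigns values to commutable serum-based materials; manufacturers use those
    # materials to assign values to their own working calibrators.
    srm_2921         = ChainLink("NIST SRM 2921 (primary reference material)", 100.0)
    secondary_proc   = ChainLink("secondary reference measurement procedure", 100.0, srm_2921.name)
    serum_pool       = ChainLink("commutable serum-based secondary reference material", 98.5, secondary_proc.name)
    field_calibrator = ChainLink("manufacturer's working calibrator", 98.5, serum_pool.name)

    for link in (srm_2921, secondary_proc, serum_pool, field_calibrator):
        upstream = link.calibrated_against or "(top of chain)"
        print(f"{link.name}: value {link.assigned_value}, traceable to {upstream}")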

Bob Barrett:
Doctor, what do you see as the outcome of your efforts and the desired state of troponin testing that the working group hopes to achieve, and will the results be relevant to the growing number of high-sensitivity and new-generation troponin assays?

Rob Christenson:
Well, the desired outcome for the working group is a laboratory and clinical environment in which troponin measurements are stable across time and results for any single specimen are the same, within analytical error, of course, whether measured in Sarasota, Sydney, Singapore, or Sarajevo by any accepted troponin method.

As for new-generation assays, as commercial assays improve and evolve, the efforts to standardize them must also improve and evolve. Once the foundation for standardization is established by the working group, however, it should readily be possible to use the knowledge gained through this and other standardization efforts to support high-sensitivity troponin assays, super-sensitive assays, and whatever may follow.

Bob Barrett:
Dr. Rob Christenson is the Director of Clinical Chemistry Laboratories at the University of Maryland Medical Center and Dr. Fred Apple is the Medical Director of Clinical Laboratories at Hennepin County Medical Center in Minneapolis. Both have been our guests in this podcast from Clinical Chemistry. I am Bob Barrett, thanks for listening.