Listen to the Clinical Chemistry Podcast



Article

Santica M. Marcovina, Noémie Clouet-Foraison, Marlys L. Koschinsky, Mark S. Lowenthal, Allen Orquillas, Michael B. Boffa, Andrew N. Hoofnagle, and Tomáš Vaisar. Development of an LC-MS/MS Proposed Candidate Reference Method for the Standardization of Analytical Methods to Measure Lipoprotein(a) Clin Chem 2021; 67:3 490–99.

Guest

Dr. Tomáš Vaisar, Research Professor of Medicine at UW Medicine Diabetes Institute at the University of Washington in Seattle.



Transcript


Bob Barrett:
This is a podcast from Clinical Chemistry sponsored by the Department of Laboratory Medicine at Boston Children’s Hospital. I’m Bob Barrett.  Lipoprotein(a), often referred to as Lp(a), is an independent risk factor for cardiovascular disease.  It is basically a low-density lipoprotein, or LDL, particle with an added attached protein called apolipoprotein(a).  This protein is a complex one, sharing high sequence homology with several regions of plasminogen, including the protease domain.  It is also notoriously difficult to measure accurately.  In the March 2021 issue of Clinical Chemistry, an article describes the development of an LC-MS/MS candidate reference method for the measurement of lipoprotein(a) that may help address this problem.

The senior author for that study is Dr. Tomáš Vaisar. He is Research Professor of Medicine at UW Medicine Diabetes Institute at the University of Washington in Seattle.  His research focuses on mass spectrometry assays to investigate lipoprotein metabolism in a range of diseases, including cardiovascular, diabetes, and Alzheimer’s disease.  Dr. Vaisar is our guest in this podcast. 

So, first of all, doctor, what is lipoprotein(a) and why is it important clinically?

Tomáš Vaisar:
Lipoprotein(a) is the most common inherited form of dyslipidemia, and it’s one of the major risk factors for premature cardiovascular disease.  Therefore, the measurement of Lp(a) levels is of great importance in clinical practice.  Availability of accurate methods for measuring Lp(a) that are standardized among clinical laboratories is critical because, for clinical use, specific cutoff values are used to identify patients at increased risk for CVD.  At present, multiple immunoassays, mainly performed on automated instruments, are available.  However, to a variable degree, the accuracy of all the commercially available methods is impacted by the large size polymorphism of Apo(a), which is a part of the Lp(a) particle.

So, the Lp(a) particle consists of two parts: an LDL-like particle, which consists of Apo(b) and a lipid core, and another protein, called apolipoprotein(a) or Apo(a), attached to it by a disulfide bridge.  So it is a very complicated molecule.  Moreover, the size of Apo(a) is genetically determined.  It’s highly heterogeneous in size because there is one kringle domain in the protein sequence which is repeated, and a person can have between three and forty repeats of this particular domain within the sequence of the Apo(a) protein.  On top of that, you have two alleles, one coming from each parent, which means that about 80% of the population have two different isoforms of Apo(a).  One can be very small, one can be very large, so it makes it a really challenging molecule to measure.  Because of this large polymorphism of Apo(a), Lp(a) levels are underestimated or overestimated in samples with an Apo(a) size that is either smaller or larger than the calibrator, because most of the current immunoassays are dependent on antibody-antigen interactions.

Moreover, historically, the concentration of Lp(a) was expressed in milligrams per deciliter, in which the total mass of the Lp(a) particle, including the protein, its lipids, and the carbohydrates attached to the protein, is considered; but in reality what is measured is only the Apo(a), which is, I should say, the signature of the Lp(a) particle, because the Apo(b)-containing LDL-like particle is basically identical to LDL.  So, the calibrators are very important, and because of the polymorphism, assigning an accurate concentration is very challenging.

Currently, there’s really not a reference method.  In the early 90s, Dr. Marcovina, a collaborator on this paper and its first author, developed the first immunoassay that is completely insensitive to Apo(a) size variability by generating a monoclonal antibody that does not react with this variable domain, called kringle IV type 2.  So, the ELISA assay is insensitive to the size differences and, on top of that, the Lp(a) values from this assay are reported in nanomoles per liter, a molar concentration that accurately reflects the number of circulating Lp(a) particles and does not vary with the mass of the particle.

This ELISA was calibrated on a carefully isolated Lp(a) particle with the value assigned by amino acid analysis, and we’ll get to it later; that’s a very important point.  So, since 1998, it was used as the gold standard method to assign a target value to a reference material, or proposed reference material, by the W.H.O.  But even after the availability of this reference material and this ELISA, the standardization of the assays has been hampered by the high level of interference from Apo(a) size variation that is present in most commercially available assays.

Bob Barrett:
And why is it important to develop reference methods for lipoprotein(a)?

Tomáš Vaisar:
It may seem unbelievable, but after almost four decades of research and standardization activities, there is really not a reference method for assay standardization for Lp(a).  And as I mentioned earlier, it is really important that the assay is standardized between multiple laboratories across the whole world, because specific cutoff values are used to tell you about your cardiovascular risk.  So, measuring the Lp(a) value accurately is really important.  Because the ELISA method from Dr. Marcovina lacks some of the characteristics needed to be considered a reference method, Dr. Marcovina and I started discussing several years ago the possibility of developing this reference method based on LC-MS, because, for reasons I’ll explain, it actually has all the prerequisites to be considered a reference method for Lp(a).

So really the main reason for establishing a reference method, for any analyte, is to ensure that different laboratories in different parts of the world, or different parts of the country, will give you the same answer when they measure your Lp(a) level, so you can rest assured whether or not you have a high risk of cardiovascular disease, because small changes around the cutoff value can basically decide whether you are considered at risk for premature cardiovascular disease or not.

Bob Barrett:
Doctor, what is involved in using liquid chromatography-mass spectrometry for protein quantification?

Tomáš Vaisar:
So, traditional protein measurements are done using immunoassays, but immunoassays are imperfect for several reasons.  They often lack concordance between different laboratories, so standardization between them is challenging, and this is partly because different antigens were used to generate the antibodies required for the ELISA assays.

There are also issues with autoantibodies against the analyte itself, which block interaction with the detection antibodies, and there’s also an issue with anti-reagent antibodies that hamper the ability to measure accurately by ELISA or by immunoassays.  Overall, it takes a lot of effort to develop a robust immunoassay.

Mass spectrometry has a long history of quantification of small molecules, and it has now been realized that the same can be applied to proteins.  However, for protein quantification as it’s done today, a so-called bottom-up approach is used, where you have to take the protein and break it down into small pieces, peptides, that can be easily measured by LC-MS.  So, the protein quantification entails two parts: breaking down the protein into peptides, which become the actual measure of the protein abundance, and then the LC-MS analysis itself.  This process is often referred to as a proteolysis-aided mass spectrometry assay.  And for multiple reasons, it has great potential to alleviate the problems associated with immunoassays.  The simple principle of the approach is that the proteins are first split into peptides with a protease, most commonly trypsin, and the peptides are then analyzed by LC-MS using methods called Selected Reaction Monitoring or Parallel Reaction Monitoring.

These methods have exquisite selectivity because they entail multiple stages of selection of the signal you’re measuring: you first separate the peptides by chromatography, then you select specifically the mass of the peptide in the first mass analyzer, then you break this ion down into fragments, and you detect only specific fragments which are unique to the peptide and, therefore, to the protein.

So, there are multiple stages of selection, and therefore, LC-MS has exquisite selectivity and specificity.

And then you can include stable isotope-labeled peptides, that is, peptides with a stable isotope label which are otherwise identical to the endogenous peptide you want to measure, and this corrects for a lot of the variability associated with the LC-MS assay, so a high level of precision can be achieved.
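The arithmetic behind stable-isotope-dilution quantification can be sketched in a few lines of Python. The peak areas and spike amount below are hypothetical; the point is only that, if the labeled peptide co-elutes and ionizes identically to the endogenous one, the light/heavy peak-area ratio times the known amount of heavy peptide spiked in gives the endogenous amount.

```python
# Minimal sketch of single-point stable-isotope-dilution quantification.
# The peak areas are made-up numbers; a real assay integrates
# extracted-ion chromatograms for the light and heavy peptide.

def sil_quantify(light_area: float, heavy_area: float,
                 heavy_spike_fmol: float) -> float:
    """Endogenous peptide amount from the light/heavy area ratio.

    Assumes the stable isotope-labeled peptide behaves identically to
    the endogenous one, so the area ratio equals the molar ratio.
    """
    return (light_area / heavy_area) * heavy_spike_fmol

# Example: light area 1.2e6, heavy area 8.0e5, 50 fmol heavy spiked in.
endogenous_fmol = sil_quantify(1.2e6, 8.0e5, 50.0)
print(round(endogenous_fmol, 1))  # 75.0
```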

Most importantly, because this process directly targets the protein you want to measure, it’s free of the limitations associated with the antibody-antigen interactions of the immunoassays.  And lastly, but definitely not least importantly, this assay can be multiplexed: you can measure multiple peptides for a given protein to give you higher confidence in your assigned value, as we did.  In addition, you can also multiplex and measure multiple proteins in the same analysis.  Luminex, for instance, uses multiple antibodies, but you are limited in how many antibodies you can use at the same time.  Here, you are limited only by the technical capability of the mass spectrometer.  So, you can measure, in reality, hundreds of proteins at the same time.

Bob Barrett:
What does it take to establish a quantitative LCMS method for proteins and lipoprotein(a) in particular?

Tomáš Vaisar:
So, despite its clear advantages over immunoassays, as I mentioned, the LC-MS assay is not without its own issues, and there are a few key steps that need to be carefully addressed.  First, because you’re not measuring the protein itself directly but have to first break it down with the protease, you generate a lot of peptides: a 50-kilodalton protein can produce 50 to 60 peptides by trypsin digestion, and you have to select the right peptides to represent the protein.  These so-called proteotypic peptides need to give you a strong signal in the mass spectrometer; they have to be stable ex vivo, so that when you’re processing them after their formation they are not susceptible to further hydrolysis, oxidation, or modification; and they have to be very specific.

So, the peptide you select has to have no homologues in other proteins; the same sequence cannot be repeated in any other protein in the whole human proteome.  There should not be any genetic mutations associated with this peptide.  This is one of the critical parts of the process, and for Lp(a) this is especially challenging because, as I mentioned earlier, Apo(a) is highly polymorphic and there’s that kringle IV type 2 repeated region, which can vary between three and forty repeats.  In addition, there’s a lot of homology within the sequence of Apo(a), whose size is anywhere between 200 and 500 kilodaltons, so it’s a huge protein.  In theory it should give you a large number of peptides; however, what we found out when we started working on the Lp(a) assay was that actually only about 25 or 30 peptides are detected in the digest of Lp(a).  Moreover, a lot of them, at least four or five, come from the repeated region, and because they are repeated, and in a variable manner, they cannot be used for the assay.
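The selection logic described here can be illustrated with a small sketch. The sequence and filter thresholds below are hypothetical (not the actual Apo(a) sequence or selection criteria): an in-silico trypsin digest, followed by rejecting peptides that are too short or too long, that occur more than once in the sequence, as peptides from a repeated region would, or that contain modification-prone residues.

```python
import re

def tryptic_digest(seq: str) -> list[str]:
    # Trypsin cleaves C-terminal to K or R, but not when followed by P.
    return [p for p in re.split(r'(?<=[KR])(?!P)', seq) if p]

def candidate_peptides(seq: str, min_len: int = 6, max_len: int = 25) -> list[str]:
    peps = tryptic_digest(seq)
    out = []
    for p in peps:
        if not (min_len <= len(p) <= max_len):
            continue               # too short or too long to measure well
        if peps.count(p) > 1:
            continue               # comes from a repeated region: not usable
        if 'M' in p or 'C' in p:
            continue               # oxidation/modification-prone residues
        out.append(p)
    return out

# Toy sequence with an internal repeat (hypothetical, not real Apo(a)):
toy = "MAAAGTKWEPTIDERSSSSKWEPTIDERQQWLNK"
print(candidate_peptides(toy))  # ['QQWLNK']
```

The repeated peptide WEPTIDER is rejected even though it is otherwise measurable, which mirrors why the kringle IV type 2 peptides cannot be used.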

So, really, at the beginning we had only six or eight peptides we could work with and select from for the assay.  In addition, there is the issue of calibration of the assay, which is really strongly tied to the problem of the digestion.  The digestion itself, even though it seems pretty straightforward and simple, is not, because not all peptides are formed quantitatively.  In fact, the digestion rarely goes to completion, that is, it rarely gives you the same number of molecules of the peptide as you had of the protein.  So it is critical to establish reproducible digestion, and, as I mentioned, the issue of calibration is tied to this.

So there are two general approaches to calibration.  One uses the actual peptides you measure: you synthesize the peptide with a stable isotope label, very carefully characterize it, purify it, and use it as a calibrator.  The key premise of this peptide-based calibration is that the digestion is quantitative, so for one molecule of protein you end up with one molecule of the peptide formed, and this is, as I mentioned, really hard to achieve.  The advantage of this approach is that making small peptides is a routine procedure, and they can be made in large quantities.

They can be highly purified and very well characterized, so you can assign an accurate value to the calibrator.  In contrast to the peptide-based calibration, a more robust approach is to calibrate with a well-characterized recombinant or isolated protein.  You are then using exactly the same molecule as the one you end up measuring, and this protein undergoes the same digestion procedure.  So, if your digestion is incomplete, the same thing happens to your calibrator as to the endogenous protein.  If you combine this approach with stable isotope-labeled peptides as internal standards, you can use a so-called double isotope dilution approach, in which you use the stable isotope-labeled peptides to normalize the LC-MS part of the signal and the external protein calibrator to account for the variable, incomplete, or imperfect digestion, and you can achieve much better calibration than with the peptides.  The limitation is, of course, the availability of the protein calibrator and establishing its traceability to SI units.
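A minimal sketch of that double-isotope-dilution calibration, with made-up numbers: the protein calibrator is digested alongside the unknowns so incomplete digestion affects both alike, the heavy peptide spiked into every digest normalizes the LC-MS response, and a sample's light/heavy ratio is read back through a line fitted to the calibrators.

```python
# Hypothetical calibrator levels and measured light/heavy area ratios.
calibrator_nmol_l = [25.0, 50.0, 100.0, 200.0]   # assigned Lp(a) values
light_heavy_ratio = [0.20, 0.41, 0.79, 1.62]     # ratios after co-digestion

def fit_line(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    slope = sxy / sxx
    return slope, my - slope * mx

slope, intercept = fit_line(calibrator_nmol_l, light_heavy_ratio)

def quantify(sample_ratio: float) -> float:
    """Interpolate a sample's light/heavy ratio back to nmol/L."""
    return (sample_ratio - intercept) / slope

print(round(quantify(0.60), 1))  # a sample with ratio 0.60 -> about 74.6
```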

Bob Barrett:
So finally, Dr. Vaisar, what makes establishing a reference method for lipoprotein(a) so complex?

Tomáš Vaisar:
To make a long story short, as I mentioned earlier, Apo(a) is a large protein with high variability in its sequence, so the selection of the peptides from the digestion was extremely challenging.  As I mentioned, we ended up selecting three peptides for Lp(a).  On top of that, making a recombinant protein as a calibrator was not easy, and it relied on the expertise of Marlys Koschinsky, one of the collaborators, who carefully purified the protein to more than 95% purity, assigned the concentration value to the calibrator with the help of Mark Lowenthal at NIST, and established SI-unit traceability of the concentration for the standard.  We then had to establish that this recombinant protein responds in the assay the same way as the endogenous Apo(a) protein that is bound to the Apo(b)-containing LDL-like particle, and we were able to do that.  So, that was a challenge.

Another challenge for the method development was that, in the absence of a reference method, we had to somehow establish that the values we are getting are real.  So, we did a direct comparison to the monoclonal antibody ELISA with about 80 samples and established excellent agreement between the two assays.  Moreover, we were also able to establish the value for the proposed reference material to be essentially within 5% of the value assigned by the ELISA.  So, overall, the method we’ve developed has all the characteristics of a reference method.  What’s now left is to fully validate the method and submit it to the JCTLM for approval so that it can actually be used as a reference method.

In addition to being a reference method, because LC-MS is easily transferable from lab to lab, this method can become generally useful for standardization between different labs.  Therefore, it can be widely used for clinical practice, eventually in the absence of the commercial ELISA.

Bob Barrett:
That was Dr. Tomáš Vaisar.  He is a Research Professor of Medicine at the UW Medicine Diabetes Institute at the University of Washington in Seattle.  He has been our guest in this podcast on the “Development of an LC-MS/MS Proposed Candidate Reference Method for the Standardization of Analytical Methods to Measure Lipoprotein(a).”  That study appeared in the March 2021 issue of Clinical Chemistry.

I’m Bob Barrett. Thanks for listening.