Listen to the Clinical Chemistry Podcast
Clark Henderson et al. Measurement by a Novel LC-MS/MS Methodology Reveals Similar Serum Concentrations of Vitamin D Binding Protein in Blacks and Whites. Clin Chem 2016;62:179-87.
Dr. Andrew Hoofnagle is Associate Professor and the Head of the Division of Clinical Chemistry in the Department of Laboratory Medicine at the University of Washington in Seattle.
This is a podcast from Clinical Chemistry, sponsored by the Department of Laboratory Medicine at Boston Children’s Hospital. I am Bob Barrett.
There are now numerous epidemiological studies demonstrating that low plasma concentrations of vitamin D are associated not only with bone-related disorders, but with a wide variety of adverse health outcomes. Recently it has been suggested that bioavailable, or free, vitamin D may be a better marker of vitamin D status than total 25-hydroxyvitamin D itself. Those free vitamin D concentrations are a function of the vitamin itself and of the concentration and isoform of vitamin D-binding globulin, a protein that is synthesized in the liver and has a strong affinity for vitamin D and its analogs.
The January 2016 issue of Clinical Chemistry, a special issue devoted to mass spectrometry and the clinical laboratory, published a paper describing a new method to quantify vitamin D-binding globulin using LC-tandem mass spec that could also identify the haplotypes and the isoforms of that protein. Dr. Andrew Hoofnagle is the senior author of that paper and he joins us in this podcast. Dr. Hoofnagle is Associate Professor and Head of the Division of Clinical Chemistry in the Department of Laboratory Medicine at the University of Washington in Seattle.
So doctor, why did your group decide to develop a method to measure vitamin D-binding globulin by liquid chromatography-tandem mass spectrometry?
For a long time it has been recognized that Blacks and Whites have different metabolism, specifically bone metabolism. Blacks have significantly healthier bones than Whites do, throughout life. One of the proposed reasons for that was differences in vitamin D biology, but there is a paradox, because Black Americans actually have much lower concentrations of 25-hydroxyvitamin D, which is the vitamin D metabolite that we use to evaluate vitamin D sufficiency.
So, on paper, it would appear that Blacks are actually vitamin D deficient, when in fact they have significantly better bone health than their White American counterparts, who have higher concentrations of 25-hydroxyvitamin D. This paradox is accompanied by another: among Caucasian Americans and other lighter-skinned Americans, as concentrations of 25-hydroxyvitamin D get lower, the risk for cardiovascular disease gets higher, and as concentrations of 25-hydroxyvitamin D get higher, the risk for cardiovascular disease gets lower.
But our group, in collaboration with others here at the University of Washington, published a paper in JAMA a few years ago showing that Whites do have this decreased risk for cardiovascular disease as the concentration of 25-hydroxyvitamin D goes up, but that in Black Americans there was no relationship whatsoever between 25-hydroxyvitamin D and cardiovascular disease.
So, both in terms of bone health and in terms of cardiovascular disease risk, Blacks and Whites behave very differently with respect to vitamin D metabolism. Over the past couple of years, people have proposed that it's not the total amount of 25-hydroxyvitamin D that's important, but instead the amount of 25-hydroxyvitamin D that's not bound to its binding protein, which is also called vitamin D-binding globulin. In addition to the vitamin D not bound to the binding protein, there are other binding proteins, including albumin, that more loosely grab on to 25-hydroxyvitamin D in plasma, helping it to remain stable and soluble.
So the hypothesis was that perhaps Blacks have better bone health, and are less susceptible to the cardiovascular disease risk associated with apparent vitamin D deficiency, because of differences in the amount of free or bioavailable vitamin D in serum. Free or bioavailable vitamin D is very difficult to measure directly; however, there have been equations in the literature for a long time that use albumin, 25-hydroxyvitamin D, and the concentration of vitamin D-binding globulin in plasma, together with affinity constants that were determined in vitro a long time ago. Using this approach, a paper was published in the New England Journal of Medicine a couple of years ago by Ravi Thadhani and his group in the Harvard system, where they used a monoclonal immunoassay to quantify the amount of vitamin D-binding globulin in a community cohort, and they used that and other data to calculate the free and the bioavailable vitamin D.
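The equation-based approach mentioned here can be sketched in the commonly cited form from the older in vitro literature. The affinity constants and the example concentrations below are assumptions for illustration, not values taken from this paper.

```python
# Hedged sketch of the free/bioavailable 25-hydroxyvitamin D calculation.
# Affinity constants follow commonly cited in vitro values (assumptions,
# not taken from the paper discussed in this podcast).

K_ALB = 6e5   # albumin-25(OH)D affinity constant, L/mol (assumed)
K_DBP = 7e8   # DBP-25(OH)D affinity constant, L/mol (assumed)

def free_25ohd(total_25ohd_molar, albumin_molar, dbp_molar):
    """Approximate free 25(OH)D, valid when binding proteins are in excess."""
    return total_25ohd_molar / (1 + K_ALB * albumin_molar + K_DBP * dbp_molar)

def bioavailable_25ohd(total_25ohd_molar, albumin_molar, dbp_molar):
    """Free plus albumin-bound 25(OH)D, i.e., everything not bound to DBP."""
    free = free_25ohd(total_25ohd_molar, albumin_molar, dbp_molar)
    return free * (1 + K_ALB * albumin_molar)
```

With plausible inputs (total 25(OH)D around 5e-8 mol/L, albumin around 6.5e-4 mol/L, DBP around 4e-6 mol/L), the bioavailable fraction comes out well below the total, and lowering the DBP concentration raises it, which is the behavior the Thadhani group's calculation relies on.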
And what they noticed right off the bat was that the amount of vitamin D-binding globulin was lower in Blacks than it was in Whites. The lower binding protein concentration effectively offset the low 25-hydroxyvitamin D concentration, such that Blacks and Whites had the same amount of bioavailable vitamin D. The entire field was excited by this, because now we had an explanation for why bone health in Blacks is better than in Whites: they have a fundamentally different amount of binding globulin and therefore comparable amounts of free and bioavailable vitamin D.
The problem is, the variability in the vitamin D-binding globulin concentration in people's serum was strongly predicted by the genotype, that is, by which polymorphism of vitamin D-binding globulin was present. And that scared us, because it implied that the amino acid changes in the protein itself were affecting how the protein was detected in an immunoassay, which is a pretty straightforward hypothesis to have, but something that's a little more difficult to prove.
And so we really raised the question and said, "what?" We were startled, but the field was excited, and so we decided to make an LC-MS assay. To make the LC-MS assay, by its very nature we digest the entire protein into individual peptides, and we can quantify one or more peptides in the protein, and that should be independent of the polymorphism itself. We are not relying on the epitope or the three-dimensional structure of the protein; we are really boiling it down to just the primary amino acid sequence.
So we had a hypothesis that we could develop an assay by LC-MS, use it to compare against the monoclonal immunoassay, and demonstrate that the monoclonal immunoassay is actually incorrect in the cases where the polymorphisms are present and the amino acid sequence is different.
I'm sorry, right now I am picturing an entire laboratory going, "what?" [laughter]
We all did; it was a great journal club.
Talk about the steps required to make a quantitative protein assay by liquid chromatography-tandem mass spectrometry.
As I mentioned, the peptides that we use to quantify the protein are generated by proteolytic digestion; we generally use trypsin. So we need to find tryptic peptides, derived from the protein of interest, that are detectable by liquid chromatography-tandem mass spectrometry.
There are many ways one can do that, and one that has been touted in the research literature for a very long time is to use prediction algorithms, or what are called shotgun proteomics databases. In other words, if these peptides have been seen in other experiments, then they may be good candidates for the targeted, quantitative proteomics assays using LC-MS that a clinical laboratory would use to quantify proteins. That is certainly one way to do it.
We decided to use a more empiric approach. In other words, we took purified protein, either protein purified from human beings or recombinant protein expressed in bacteria. And we used both of those sources of protein to try to identify the peptides that were going to be most robust in the quantitative assay we were developing.
Some of the things we look for in a peptide, the characteristics that might make it useful in a protein assay by LC-MS, include the length of the peptide. Peptides of a certain length have just the right mass: they chromatograph well, and they "fly," that is, ionize, well in the mass spectrometer. We are also interested in the amino acid makeup of the peptide, because some amino acids are less stable than others, and in the stability of the peptide itself: we don't want the peptide to be lost to the sides of the vessel you are working in, so looking at peptide stability is important.
Then there's the precision of the measurement; in other words, if I inject the sample over and over again, do I get the same peak area over and over again? So what is the precision of the LC-MS analysis of the peptide? Finally, we can look at the peak shape of each of the chromatographic peaks that correspond to the different peptides. Peptides that chromatograph better are going to be more precise and better behaved in assays down the line.
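The empirical screening criteria just described can be sketched as a simple filter. The thresholds, the list of "unstable" residues, the example peptide, and the `passes_screen` helper are all illustrative assumptions, not the authors' actual rules.

```python
import statistics

# Illustrative sketch of empirical tryptic-peptide screening; thresholds
# and the residue list are assumptions for demonstration only.
UNSTABLE_RESIDUES = {"M", "C", "W"}  # residues prone to oxidation/modification

def passes_screen(sequence, replicate_peak_areas, max_cv=0.15,
                  min_len=7, max_len=20):
    """Return True if a tryptic peptide looks usable for quantification."""
    if not (min_len <= len(sequence) <= max_len):
        return False                      # length/mass suited to LC-MS/MS
    if any(aa in UNSTABLE_RESIDUES for aa in sequence):
        return False                      # chemically labile residues
    mean = statistics.mean(replicate_peak_areas)
    cv = statistics.stdev(replicate_peak_areas) / mean
    return cv <= max_cv                   # precise across repeat injections
```

A peptide like the hypothetical "ELPEHTVK" with tight replicate peak areas would pass; a two-residue fragment, or a peptide whose repeat injections scatter widely, would not.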
And the way we evaluated all of these characteristics empirically was with a software package that was developed here at the University of Washington, in the Department of Genome Sciences, called Skyline. The MacCoss lab has been building this vendor-agnostic software platform for many years. It will take raw data from any of the mass spectrometers and allow it all to be processed in the same software platform, which is fantastic.
And so we were able to bring in the raw data from multiple instruments to really ask the question: what are the best peptides, empirically? That way, when we went out and spent the money on the reagents, which in this case are the isotope-labeled internal standard peptides, we were quite confident that things were going to work well.
The next steps are to optimize the digestion and the LC-MS method so that these tryptic peptides from the protein of interest are a good surrogate for the protein concentration. Now, in addition to the protein concentration, one thing our LC-MS assay was able to do was look for the specific polymorphisms that are encoded by the genome. Because the genome encodes different amino acids, there are actually different peptides that we can try to identify by LC-MS.
And this isn't a new idea. It was actually first demonstrated clinically by Bob Bergen's lab at Mayo many years ago, also published in Clinical Chemistry: we can look for the specific peptides that are defined by those polymorphisms, or those mutations if you will, and if such a peptide is present, it means that the amino acid is different and the polymorphism is present.
So we can actually predict the genotype of the individual by looking for specific polymorphic peptides. In that respect we can't really be empiric; we can't just pick whichever peptides might be most useful for quantifying the protein concentration, because we are looking for specific peptides. So we have to optimize the digestion and the LC-MS to get the best sensitivity we can for those very specific peptides; sometimes, unfortunately, we don't really have a choice.
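The genotype-calling idea can be sketched like this: detecting a variant-specific peptide implies the corresponding allele is present. The allele names follow the common Gc nomenclature for vitamin D-binding globulin, but the peptide strings and the `infer_diplotype` helper are hypothetical placeholders, not the peptides actually monitored in the paper.

```python
# Sketch of genotype inference from variant-specific peptides.
# The peptide-to-allele mapping is a made-up placeholder; the real assay
# monitors specific tryptic peptides spanning the DBP polymorphic sites.
ALLELE_PEPTIDES = {
    "Gc1F": "PEPTIDEF",    # hypothetical peptide unique to the Gc1F allele
    "Gc1S": "PEPTIDES",    # hypothetical peptide unique to the Gc1S allele
    "Gc2":  "PEPTIDETWO",  # hypothetical peptide unique to the Gc2 allele
}

def infer_diplotype(detected_peptides):
    """Return the pair of DBP alleles implied by the detected peptides.

    One allele's peptide detected -> homozygote; two -> heterozygote."""
    alleles = sorted(a for a, pep in ALLELE_PEPTIDES.items()
                     if pep in detected_peptides)
    if len(alleles) == 1:
        return (alleles[0], alleles[0])   # homozygote
    if len(alleles) == 2:
        return tuple(alleles)             # heterozygote
    raise ValueError("expected peptides from one or two alleles")
```

So a sample yielding only the Gc1F marker peptide would be called Gc1F/Gc1F, while a sample yielding both the Gc1F and Gc2 markers would be called a Gc1F/Gc2 heterozygote.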
But once you have identified the peptides, purchased the labeled internal standard peptides, and optimized your digestion and your liquid chromatography-tandem mass spectrometry, you are really ready to develop an assay and turn it into a production assay.
And how did you validate that this new assay was giving good results?
Well, back in 2014, a man named Russ Grant, who works at LabCorp, and I published an opinion piece in Clinical Chemistry where we laid out what we thought would be the most important experiments for people who are interested in publishing a novel biomarker.
In other words, here is a protein that we think is going to be predictive of disease, whether diagnostically, prognostically, or in therapeutic management. People are publishing new biomarkers all the time, but they don't go through the steps to demonstrate to other laboratories that those data are going to be reproducible over and over again, or from lab to lab, et cetera.
So we set out in 2014 to put together the experiments that we thought would provide a sufficient nugget of evidence that people should feel comfortable trying to adopt an assay for their own laboratories. And the things we really focused on, the most important hallmarks of good biomarker assays, are precision and linearity. Precision and linearity are defined pretty simply by just a couple of experiments that take less than a week in any laboratory to perform.
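The two core calculations behind those experiments, imprecision of replicates and linearity of a dilution or mixing series, reduce to a few lines. This is a minimal sketch, not the authors' validation code.

```python
import statistics

# Minimal sketch of the two core validation calculations: imprecision
# (coefficient of variation of replicates) and linearity of a series.

def cv(replicates):
    """Coefficient of variation of repeated measurements of one sample."""
    return statistics.stdev(replicates) / statistics.mean(replicates)

def linearity_slope_r2(expected, observed):
    """Least-squares slope and R^2 for a dilution/mixing series."""
    n = len(expected)
    mx, my = sum(expected) / n, sum(observed) / n
    sxx = sum((x - mx) ** 2 for x in expected)
    sxy = sum((x - mx) * (y - my) for x, y in zip(expected, observed))
    syy = sum((y - my) ** 2 for y in observed)
    slope = sxy / sxx
    r2 = sxy ** 2 / (sxx * syy)
    return slope, r2
```

A precise assay shows a low CV across replicates, and a linear one shows an R-squared near 1 with a stable slope across the measuring range.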
The other things we are very interested in are interferences. If you are measuring a sample from a human, well, humans are strange creatures; we have all sorts of interfering substances, everything from lipids (we have very different lipid makeup from human to human), to the very different things we eat, to lots of different pharmaceutical drugs; the list goes on and on. So we need to think very carefully about what interferences we might encounter, especially in sick people, and test those interferences to see if the assay can withstand them. And finally, stability: there's nothing more frustrating than developing an assay using a peptide that somebody is excited about, only to find it's gone after an hour sitting on the instrument.
And so it's a very simple series of experiments that we can do to demonstrate that this biomarker, and the way we are measuring it, could be useful in clinical research, or maybe even in the future in clinical care. Well, the reviewers of the paper actually asked us for a couple more experiments, which I think contributed quite substantially. Most importantly, they asked for spike recovery: we were able to demonstrate that if we spiked in exogenous vitamin D-binding globulin, we could get back what we were expecting to get, within error.
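Spike recovery itself is a one-line calculation: add a known amount of purified analyte to a sample and see what fraction of the addition is measured back. The numbers in the example are made up for illustration.

```python
# Sketch of a spike-recovery calculation. The example values below are
# made-up illustrations, not data from the paper.

def percent_recovery(base_result, spiked_result, amount_added):
    """Fraction of the spiked-in analyte measured back, as a percentage."""
    return 100.0 * (spiked_result - base_result) / amount_added

# Example: a sample measuring 200 units, spiked with 100 units, reads 295.
print(percent_recovery(200.0, 295.0, 100.0))  # prints 95.0
```

Recoveries close to 100 percent across the measuring range support the claim that the assay reports what is actually in the tube.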
The most important thing that Russ and I put into that paper was the concept of transparency. Anybody can publish a paper and say, look what I did; but we asked that the raw data generated during the development and validation of the assay be provided to readers, so that they could evaluate the raw data themselves and decide whether this assay was worth launching in their own laboratory or not.
Again, Skyline from the MacCoss laboratory is a fabulous platform; it's basically the Adobe Reader of mass spec data. You are able to download the raw data, look at the chromatograms, redo all the calculations, and make sure that the data the investigators provide are actually worth sinking your teeth into in the future.
And so this was the first opportunity we had to apply the guidelines we developed a little more than a year ago to a new assay, one that we think is going to be important for clinical research in the next couple of years.
Doctor, what experiments did you do to test your hypothesis about the immunoassay for vitamin D-binding globulin?
So we were very fortunate to collaborate with a group from the University of Minnesota led by Pam Lutsey, along with one of her collaborators from the Johns Hopkins University, Liz Selvin, and we were able to utilize some samples from the Atherosclerosis Risk in Communities Study, or the ARIC Study.
This is a population of normal community-dwelling adults who have been followed for an extended period of time to ask the question: what is the risk of cardiovascular disease as we continue to get older? We had the opportunity to draw from a subcohort of this population; we had at our disposal samples from 187 people.
What we were able to do was compare results across assays: we ran the same monoclonal immunoassay that was used in the New England Journal of Medicine article back in 2013 on those samples, and then applied our new LC-MS assay to the same samples.
First, we were able to compare individual results against the monoclonal immunoassay, and to compare the means by genotype against a polyclonal immunoassay that had been published in Clinical Chemistry back in 2001. And we were able to demonstrate that the genotype predicted almost all of the variability in the measurements made by the monoclonal immunoassay, but very little of the variability when measured by the LC-MS assay.
And then, when you looked at the mean concentrations measured per genotype by the LC-MS assay, they lined up very, very closely with the mean concentrations by genotype as measured by the polyclonal immunoassay. Now, these aren't perfect data, and they don't completely prove our hypothesis, but they do strongly suggest that the polymorphisms are strongly influencing the concentration of vitamin D-binding globulin as measured by the monoclonal immunoassay.
One of the problems that we have is that we really need to demonstrate that the LC-MS assay quantitatively recovers all of the polymorphisms identically, and we need purified protein with each of the polymorphisms to do that. So we are working closely with a collaborator to try to generate those reagents, and so hopefully we will have answers in the coming months.
But we did do two things that we thought would help calm people's nerves, including our own. One was to look at the crystal structure of vitamin D-binding globulin: we found that the peptides we were using to quantify the amount of protein in each sample are very far away, more than halfway across the protein, from the polymorphic peptides. So that was piece of evidence number one.
The other was to look at the ratio of the different quantification peptides to each other. Presumably, if each of the peptides were liberated to the same extent, or at the same rate, as the other peptides in the protein, the ratio should be very constant as we go from genotype to genotype, and that's in fact exactly what we saw. So from everything we could tell, the new LC-MS assay is agnostic to the polymorphism and gives very reproducible and reliable results regardless of the genotype. And as an added bonus, we can actually determine the genotype in the same assay in which we are quantifying the protein.
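That consistency check can be sketched as follows: if digestion liberates all quantifying peptides at the same rate, the ratio of any two peptides' peak areas should be nearly constant within and across genotype groups. The `ratio_cv_by_group` helper and its grouping scheme are illustrative assumptions, not the authors' analysis code.

```python
import statistics

# Sketch of the peptide-ratio consistency check: compute the CV of the
# peptide A / peptide B peak-area ratio within each genotype group.

def ratio_cv_by_group(peptide_a, peptide_b, genotypes):
    """CV of the A/B peak-area ratio within each genotype group."""
    groups = {}
    for a, b, g in zip(peptide_a, peptide_b, genotypes):
        groups.setdefault(g, []).append(a / b)
    return {g: statistics.stdev(r) / statistics.mean(r)
            for g, r in groups.items()}
```

Ratios with near-zero CV in every genotype group argue that no polymorphism is selectively suppressing one of the quantifying peptides.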
Well, finally, let's look ahead: what's next for this assay, and for other protein assays performed by liquid chromatography-tandem mass spectrometry?
Well, I think for this assay the most important thing is to demonstrate once and for all that this LC-MS assay is free of interference from the polymorphisms, and I think we are on our way. But in general, for protein assays, and for immunoassays for proteins in particular, we need to think really hard about what this means for patient care.
We know, and we have known for many years, that immunoassays are affected by antibodies that human beings make completely nonspecifically, and in some cases those antibodies might bind to the epitopes that are being used in the immunoassay to try to detect the protein, which means we will get a falsely negative or falsely low result.
In other cases, the antibodies we make nonspecifically might recognize the reagent antibodies used in the immunoassay and give us a false positive result. Those are really scary, because false positive results, especially with cancer tumor markers, can lead to some sort of treatment, whether surgery, radiation therapy, or even chemotherapy, and so false positive results can be truly devastating in people's lives. The whole process of taking a sample and digesting it with trypsin or another proteolytic enzyme to generate peptides completely destroys those antibodies, which is very exciting: we can remove, once and for all, an interference that has been plaguing clinical care for a very long time.
In addition, the results we get from different immunoassays differ from manufacturer to manufacturer, and so we proposed a long time ago that we would be able to use LC-MS to standardize assays and get the same result from lab to lab. A couple of years ago we published an article in Clinical Chemistry showing that measurements of IGF-1 could be the same from lab to lab even though we were all using different instruments, different chromatographs, different mass spectrometers; it didn't matter, we could get comparable results from lab to lab.
And actually, in this issue of Clinical Chemistry there is a paper from Brian Netzel's group at Mayo, who, in collaboration with us, LabCorp, and ARUP in Salt Lake City, showed that although we are not commercial in vitro diagnostic manufacturers, we are able to calibrate our assays and get comparable results from lab to lab.
We are all using different digestion conditions, different reagents, and different calibration techniques, and yet we are in closer agreement from laboratory to laboratory using this LC-MS approach than with the commercially available immunoassays that are out there. So I think the future of measuring proteins by mass spectrometry is wide open. We need to develop the techniques and the reagents to help take care of patients better.
That was Dr. Andrew Hoofnagle. He is Associate Professor and the Head of the Division of Clinical Chemistry in the Department of Laboratory Medicine at the University of Washington in Seattle. He has been our guest in this podcast from Clinical Chemistry on vitamin D and its binding globulin. His paper appeared in the January 2016 issue of Clinical Chemistry, a special issue devoted to mass spectrometry and the clinical laboratory. I am Bob Barrett. Thanks for listening!