Listen to the Clinical Chemistry Podcast



Article

Michael V. Holmes and George Davey Smith. Can Mendelian Randomization Shift into Reverse Gear? Clin Chem 2019;65:363-6.

Guest

Dr. Michael Holmes is from the University of Oxford and Dr. George Davey Smith is from the University of Bristol, both in the United Kingdom.



Transcript


Bob Barrett:
This is a podcast from Clinical Chemistry, sponsored by the Department of Laboratory Medicine at Boston Children’s Hospital. I am Bob Barrett.

Mendelian randomization is a genetic epidemiological approach that has made substantial inroads into our understanding of the causes and consequences of disease, but can that same technique be run in reverse? In the March 2019 issue of Clinical Chemistry, a paper investigated potential blood markers of early chronic kidney disease that are caused by loss of kidney function, using an innovative reverse Mendelian randomization approach.

That same issue included an editorial that was authored by Dr. Michael Holmes from the University of Oxford and by Dr. George Davey Smith from the University of Bristol, both from the United Kingdom and both are our guests in this podcast.

So, Dr. Davey Smith, we’ll start with you. This editorial covers some interesting topics. Before we discuss these, what can you tell us about Mendelian randomization?

Davey Smith:
So, Mendelian randomization is an epidemiological approach which uses molecular genetic variation as an indicator of a modifiable exposure, as a way of getting better evidence on the causal effect of that exposure. Now, that might sound strange, and it's perhaps best to explain it through worked examples.

In epidemiology, we will measure modifiable factors like body mass index or your alcohol consumption or your C-reactive protein level, whatever it is, and then follow people up and see how that exposure predicts the outcome. But there are obvious problems doing that, in that your outcome, your developing disease process like atherosclerosis, might influence your exposure, so it will influence your C-reactive protein levels. Or, if your doctor has told you you've got developing coronary disease, you might cut down on your alcohol consumption or your smoking. So, that means the causation is in the reverse direction. It runs from your developing disease to what you think of as an exposure.

And then, secondly, many of the exposures will be confounded. There will be other factors which underlie the exposure, which might also affect your outcome directly, not through the exposure. So, for example, smoking and obesity both increase C-reactive protein level, and those will also obviously influence coronary heart disease. So, you end up in a situation where it appears that CRP is influencing coronary heart disease when the association is actually confounded.

Now, if you use genetic variants, let's say, a genetic variant related to C-reactive protein levels, and of course, there are such variants in the [inaudible] of the CRP gene. Then, those genetic variants will obviously not be influenced by your developing disease process. And also, they will not be confounded by the behavioral, socioeconomic, and lifestyle factors which confound your directly measured exposure. So, they get around two of the major problems of observational epidemiology when you're trying to infer causal effects. And this has been used widely for looking at biomarkers and whether they have causal effects on outcomes. For example, with HDL cholesterol, Mendelian randomization studies were some of the first evidence that came out that HDL cholesterol did not actually protect against coronary heart disease, despite the fact that it had been called "good cholesterol" for decades. That turned out not to be a causal effect. It wasn't protective.

So, that's the basic approach. It's using genetic variants as proxies for modifiable exposures, so you can get better evidence on whether actually intervening on that exposure is likely to improve the outcome.
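The core arithmetic behind this approach can be sketched very simply. In the standard instrumental-variable formulation, the causal effect of the exposure on the outcome is estimated as the ratio of the gene–outcome association to the gene–exposure association (the Wald ratio). The sketch below uses purely illustrative numbers, not estimates from any real study; the function name `wald_ratio` and the effect sizes are our own assumptions for demonstration.

```python
# Minimal sketch of a Mendelian randomization (Wald ratio) estimate.
# All effect sizes below are illustrative, not from any real study.

def wald_ratio(beta_gx: float, beta_gy: float) -> float:
    """Estimated causal effect of exposure X on outcome Y,
    instrumented by variant G: the gene-outcome association
    divided by the gene-exposure association."""
    return beta_gy / beta_gx

# Suppose a CRP-raising variant increases log CRP by 0.20 units
# per allele (beta_gx) but shows essentially no association with
# coronary heart disease (beta_gy near 0).  The implied causal
# effect of CRP on disease is then near zero, echoing the null
# CRP findings discussed above.
beta_gx = 0.20   # per-allele effect on the exposure (log CRP)
beta_gy = 0.002  # per-allele effect on the outcome (log odds CHD)

effect = wald_ratio(beta_gx, beta_gy)
print(f"Wald ratio estimate: {effect:.3f}")  # near-null effect
```

The logic only holds if the variant affects the outcome solely through the exposure, which is why pleiotropy, discussed later in this conversation, matters so much.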

Bob Barrett:
So, Dr. Holmes, are there any examples where Mendelian randomization provides us with findings that would’ve been difficult to obtain from other study designs?

Michael Holmes:
Yes. The literature now is burgeoning with these types of Mendelian randomization studies. And there are several that are notable for the fact that previous studies, say of observational design, suggested that there might be a relationship, but an actual clinical trial to elucidate whether that represents a cause and effect relationship has been challenging to undertake.

Bob Barrett:
And why is that important?

Michael Holmes:
When we come to think about what we do to prevent heart disease from happening, obviously, we want to focus our attention to those exposures, or risk factors, that have a causal relationship with disease, the point being that as George just mentioned, if we focus our effort on modifying biomarkers that are not causing disease, then that won’t lead to a reduction in disease occurrence. Whereas, in contrast, if we focus efforts on those that are reliably indicated to have a causal relationship, then that can have a profound effect on lowering the incidence of disease.

Bob Barrett:
Well, what impact have they had on the way we think about health and disease?

Michael Holmes:
So, to take a couple of examples, there was a large randomized clinical trial of a lifestyle intervention that was primarily designed to lower weight. And they randomized about 5,000 individuals to an intervention that led to a modest reduction in body mass index, which is a measure of adiposity.

And despite this intervention effectively lowering BMI, and despite follow-up for almost 10 years, the trial was halted with insufficient evidence to detect an effect. And therefore, from the clinical trial evidence, it wasn't clear whether or not body mass index was related to heart disease in a cause and effect manner. But what we now know from human genetics, from multiple different studies, populations, and datasets, is that adiposity as measured by body mass index is strongly, robustly, and almost certainly linked in a cause and effect manner to a higher risk of heart disease.

So, while the clinical trial couldn't show this robustly, human genetics has been able to. And this is important because we have what's been called a pandemic of obesity, and it almost empowers us at the individual level to realize that, "Okay, yes, it is challenging to modify our weight if weight is elevated. But in doing so, we can, in effect, lower our risk of heart disease." So, it can be actionable information acquired from such human genetic studies.

Bob Barrett:
Doctors, your editorial talks about Mendelian randomization shifting into reverse gear. Now, what do you mean by that?

Michael Holmes:
So, conventionally, Mendelian randomization is used to assess whether an exposure causes a disease. Our editorial was linked to a study by Guillaume Paré and colleagues which took a different approach: they used genetic variants linked to a disease or pre-disease state, a marker of kidney function, and then used that to explore which biomarkers might be linked to that genetic risk score for renal disease, in an attempt to elucidate whether this might identify new causal biomarkers related to kidney disease, or biomarkers that might be useful for disease prediction.

Bob Barrett:
And can you give some examples of these scenarios?

Michael Holmes:
So, in the editorial, we provide a first attempt, or naive interpretation, of how we might interpret biomarker associations of a genetic risk score comprising single nucleotide polymorphisms linked to a disease state. We describe seven scenarios, and we'll talk through three of these for the purposes of the podcast.

The first one is, say we take a variant linked to heart disease such as PCSK9, which is now a drug target for heart disease. We will identify that PCSK9 is linked, not only to heart disease, but also to blood concentrations of low-density lipoprotein cholesterol, which is a biomarker that actually causes heart disease. So, in this scenario, we have a genetic variant which is linked to heart disease and is also strongly linked to the biomarker that causes the disease. So, that's one of the scenarios.

Our second scenario is that we have a genetic risk score, again for heart disease, and this time we identify an association of that genetic risk score with statin use. Under a naive interpretation, we might say, "Okay, here's a higher genetic risk score linked to higher use of statins," and naively conclude that statins cause heart disease. But that would be the wrong interpretation: in this scenario, most patients who develop heart disease are treated with statins, so the association has arisen by reverse causality.

So, that's the second scenario. And the third scenario is one where the association of the genetic risk score with a biomarker has arisen through what we call horizontal pleiotropy, or pleiotropy of the biomarkers. So, for example, just as with the PCSK9 example I mentioned earlier, we might have a genetic variant in the interleukin-6 receptor. And we know that in the inflammation pathway, not all inflammatory markers are linked to heart disease. As George mentioned a couple of minutes ago, for example, C-reactive protein is not linked to heart disease in a cause and effect manner.

But a genetic risk score for heart disease might be linked to C-reactive protein through, for example, these interleukin-6 pathways, through these pleiotropic mechanisms. So, the gist of what we're trying to demonstrate is that biomarker associations of genetic risk scores for disease have multiple potential interpretations, and the interpretation has implications for whether that biomarker is important for etiology, whether it's important for prediction, or whether the association might have arisen as an artifact of how the study was designed.
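The "reverse gear" direction described above amounts to building a weighted genetic risk score for a disease and then scanning for biomarkers associated with it. A minimal simulated sketch of that step is below; the variant names, weights, and the simulated biomarker are all hypothetical, and a real analysis would still need the scenarios just discussed to decide whether any association reflects cause, consequence, or pleiotropy.

```python
# Sketch of the "reverse" direction: associate a disease genetic
# risk score (GRS) with a candidate biomarker.  All data are
# simulated; variant names and weights are hypothetical.
import random

random.seed(0)

# Hypothetical per-allele disease weights for three variants.
weights = {"snp1": 0.30, "snp2": 0.15, "snp3": 0.10}

def grs(genotypes: dict) -> float:
    """Weighted allele count: sum of allele dosage x disease weight."""
    return sum(weights[s] * genotypes[s] for s in weights)

# Simulate individuals: genotypes, plus a biomarker partly driven
# by the same variants, so a GRS-biomarker association exists --
# but its interpretation is not settled by the association alone.
people = []
for _ in range(1000):
    g = {s: random.choice([0, 1, 2]) for s in weights}
    biomarker = 0.5 * grs(g) + random.gauss(0, 0.2)
    people.append((grs(g), biomarker))

# Crude association test: Pearson correlation of GRS and biomarker.
n = len(people)
mx = sum(x for x, _ in people) / n
my = sum(y for _, y in people) / n
cov = sum((x - mx) * (y - my) for x, y in people) / n
vx = sum((x - mx) ** 2 for x, _ in people) / n
vy = sum((y - my) ** 2 for _, y in people) / n
r = cov / (vx * vy) ** 0.5
print(f"GRS-biomarker correlation: r = {r:.2f}")
```

In practice this scan would be run across many measured biomarkers at once, and each resulting association would then be triaged against the seven scenarios in the editorial.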

Bob Barrett:
Doctors, could you talk about the broader relevance of the findings by Paré and his colleagues?

Michael Holmes:
So, I would say that the discoveries from Paré and associates' study are valuable, not only for deciphering the genetic architecture of disease, but also for elucidating pathways which might, themselves, lead to disease, or which might arise from the disease process itself.

And then, when we identify biomarker associations of these genetic risk scores, we need to really think about the context of these findings. And as I said before, that context has implications for what potential roles those associations may have when we think, in the longer term, about how we might translate those findings into clinical practice.

Bob Barrett:
Well, finally, let’s look ahead. Where do you see the future of using genetics to improve our understanding of the way the disease occurs and how might that information be useful to the way we approach disease treatment and prevention?

Michael Holmes:
So, what has become available to scientists over the past few years really is a plethora of these large-scale, publicly available datasets, which include large numbers of individuals who have genome-wide genotyping, who have electronic health records, and who often have blood and urine samples, which can then be measured for various biomarkers. And the combination of these types of data means that we are really going to be increasingly well-placed to translate genetic discoveries into improving how we care for patients, how we prevent disease from occurring, and really trying to do our best to minimize the burden that arises from disease.

So, for example, understanding how disease originates and using that information not only to prevent disease, but also to discover new drug targets; identifying which individuals in the population are particularly at risk of developing certain types of disease and tailoring our approach to disease prevention to just those individuals; and gaining a broader understanding of what interactions might exist between what we do and our susceptibility to developing disease.

So, with all of these things coming together, we will hopefully enter an era where we can use these large biobanks to prevent disease from occurring, by combining large-scale genotyping datasets with electronic health records and detailed measurements.

Bob Barrett:
That was Dr. Michael Holmes from the University of Oxford. He was joined by Dr. George Davey Smith from the University of Bristol, both from the United Kingdom. They have been our guests in this podcast on reverse Mendelian randomization. Their editorial, as well as the original paper applying the technique to chronic kidney disease, appears in the March 2019 issue of Clinical Chemistry. I'm Bob Barrett. Thanks for listening!