Listen to the Clinical Chemistry Podcast



Article

Trevor A. McGrath, et al. Overinterpretation of Research Findings: Evidence of “Spin” in Systematic Reviews of Diagnostic Accuracy Studies. Clin Chem 2017;63:1353-62.

Guest

Dr. Matthew McInnes is a radiologist at the Ottawa Hospital and a clinician investigator at the OHRI Clinical Epidemiology program. Dr. McInnes is also a Deputy Editor for the Journal of Magnetic Resonance Imaging and Associate Editor of Radiology.



Transcript


Bob Barrett:
This is a podcast from Clinical Chemistry, sponsored by the Department of Laboratory Medicine at Boston Children’s Hospital. I am Bob Barrett.

Systematic reviews of the available scientific literature are a key component in developing evidence-based guidelines and policies. If these data or their interpretations are flawed, so are the resulting medical practice recommendations. When this occurs in systematic reviews of diagnostic accuracy studies, it can harm patients in the form of both missed diagnoses and overdiagnosis.

An article in the August 2017 issue of Clinical Chemistry provides insight into how often “spin,” or overinterpretation, occurs in systematic reviews of diagnostic accuracy studies. We are joined by Dr. Matthew McInnes, the corresponding author of this report. He is a radiologist in the division of abdominal and chest radiology in the Department of Medical Imaging at the Ottawa Hospital and a clinician investigator at the OHRI Clinical Epidemiology program. Dr. McInnes is also a Deputy Editor for the Journal of Magnetic Resonance Imaging and an Associate Editor of Radiology.

So doctor, why should anyone but epidemiologists care about spin or overinterpretation in the biomedical literature?

Matthew McInnes:
Well, spin is a phenomenon where authors of studies are overly optimistic about their study results. So they’ll say a drug works really well when the results don’t necessarily reflect that. We know that’s a phenomenon that is very common in intervention trials for medications. But what we don’t know is whether it’s common in diagnostic accuracy systematic reviews. And for areas like clinical chemistry or radiology, which is my specialty, diagnostic accuracy research is one of the most common forms of research.

So if people are misrepresenting the results of their diagnostic accuracy studies, it can be a problem, and the reason it’s a problem is that if, let’s say, you are the United States government reading a systematic review regarding the accuracy of some test, and you read it and you think, “Well, this has great accuracy. Let’s fund it, implement it,” that can be problematic if you’re basing that decision on data that’s positively spun and may not reflect the actual accuracy of the test.

In short, if they think a test is great, they implement it. That may cost money because the test isn’t as accurate as they think it is, so it’s money wasted. But more importantly, it could harm patients. Let’s say you think a test is very good at excluding a disease like cancer, but it’s actually not as good as the people using it think. Then you get missed diagnoses of cancer, and obviously that’s problematic for the patients undergoing that test.

Bob Barrett:
What about medical interventions? How might overly optimistic interpretations of medical interventions cause harm?

Matthew McInnes:
Well, similarly. Medical interventions, we classically think of as a type of surgery or a medication that would positively impact patients. So if you’re presenting the results of a study or a systematic review of a medical intervention, let’s say it’s surgery for colon cancer, and you’re spinning the results saying, “Oh, it’s a great intervention,” you may not be reflecting important side effects, like people dying during the surgery. Then when that is implemented in practice, people will underestimate the number of deaths or important side effects from that intervention, and the outcomes won’t be as good as they thought they would be, so you’ll have more patient deaths and more side effects. Those are the types of things that can really blunt how well an intervention actually works compared to how well you think it’s going to work.

It’s the same thing with medications: if you think a medication is great for treating high blood pressure, but it causes important side effects like bleeding in the brain that weren’t well reported or weren’t reflected in the study report, that again can have really negative impacts on the patients treated with it.

Bob Barrett:
So why do spin and overinterpretation occur?

Matthew McInnes:
Well, that’s a really complex question. Part of it may be that people, physicians and researchers included, tend to be inherently optimistic, and as a physician, you really want your test or intervention to work. If I’m a radiologist and I’m studying CT, I’m probably inherently biased towards thinking, “Yes, CT is great,” because that’s what I use every day, and we may just unconsciously present the results in a way that’s more positive than is warranted. I don’t think it’s often a malicious thing where people actually want to say something is better than it is; it may just be a subconscious way of presenting the results that doesn’t reflect some of the problems.

Let’s say I do a systematic review of CT for appendicitis and it shows an accuracy of 85%, but in children it’s not as good. Well, I may say in my conclusion, “Yes, this is a great test for appendicitis,” but I may forget to say it’s not great for kids. Then, when it’s implemented in practice, the group of patients for whom it may not be as effective may not be recognized.

Again, it’s often an error of omission or a failure to communicate important nuance regarding tests, because sometimes there are certain subgroups of patients for which a test or an intervention might not be effective, or sometimes there are problems with the studies you’re using to draw your conclusions. Often it’s poor communication rather than a deliberate attempt to actively misrepresent. So I don’t think the intent to misrepresent is usually there, but the effect is that results are misrepresented. And I think we need to educate authors, readers, editors, and peer reviewers on what spin is; if we recognize it, perhaps we can better prevent it.

Bob Barrett:
So it’s not just ego.

Matthew McInnes:
Without going into the head of every author who has written these systematic reviews, I really don’t think people often intentionally want to misrepresent results. I think it’s, again, inherent optimism and subconscious positivity. And as an author of these reviews myself, I’ve had to be introspective and look back at some of my reports and realize that perhaps I’m not completely free from spin in my own reports, so it has certainly changed the way I approach my own reporting of research.

Again, awareness is the first step, and education and outreach to prevent or minimize this are, I think, important. So this is really a first step: to make people aware of what spin is, how common it is, and what the most common types of spin are in these test accuracy reviews. Once we’re aware of that, perhaps we can go forward and minimize or reduce it.

Bob Barrett:
Well, your study found that around 70% of systematic reviews of diagnostic accuracy studies contained at least one form of overinterpretation. What do we do? What strategies can be implemented to reduce this frequency?

Matthew McInnes:
That’s a really good question. As I said in my previous answer, I think understanding what spin is, and that it’s very common in our test accuracy systematic reviews, is step one. But certainly awareness is not a solution in and of itself. What we do know is that the most common form of spin we identified is saying that a test is good when it’s not necessarily as good as we think.

The others are saying a test is good but not recognizing that the studies you used may be at high risk of bias, and saying a test is good without realizing that it may not be equally good for all groups of patients. Those are the three most common forms. So I think we start with those three very common forms, and editors like Dr. Rifai at Clinical Chemistry or Dr. Kressel at Radiology, who publish a lot of these systematic reviews, can be aware of that and perhaps educate reviewers and authors on how to reduce it. That’s step one.

I think the second step is to have a reporting guideline that’s specific to diagnostic test accuracy systematic reviews. There are a lot of nuances and tricky things with this type of systematic review that really need to be understood and reported correctly.

In my group, some of the authors of this paper are working on such a guideline, which should be released later this year. So hopefully a guideline on how to report these systematic reviews will also provide a framework for reducing this type of behavior.

Bob Barrett:
Well, finally, doctor, will similar strategies work for minimizing misinterpretation in other types of biomedical literature?

Matthew McInnes:
Well, that’s a really good question. We know that spin, or overinterpretation, is common in all research; we’re only recognizing it now in our diagnostic accuracy systematic reviews. There’s been some good work done on randomized controlled trials and systematic reviews of these trials, and certainly, reporting guidelines and standards for those types of studies and systematic reviews have shown some modest benefit in improving reporting.

I’m not sure anyone’s done an assessment to see whether spin has actually decreased before and after those interventions. It’s a relatively new phenomenon in research, in that we’ve only started to evaluate its presence in the last, let’s say, five years or so. So no one’s really done any follow-up studies to see, “Has this changed over time? Has the frequency of spin decreased over time?” I think those are some important things to look at, to see whether the interventions we’re putting in place are helping to reduce spin specifically, and certainly our group is looking forward to doing some follow-up studies following interventions such as specific reporting guidelines for test accuracy systematic reviews.

Bob Barrett:
Dr. Matthew McInnes is a radiologist in the division of abdominal and chest radiology in the Department of Medical Imaging at the Ottawa Hospital, and is a clinician investigator at the OHRI Clinical Epidemiology program. He’s been our guest in this podcast from Clinical Chemistry. I’m Bob Barrett. Thanks for listening.