Listen to the Clinical Chemistry Podcast



Article

D.A. Korevaar, E.A. Ochodo, P.M.M. Bossuyt, and L. Hooft. Publication and Reporting of Test Accuracy Studies Registered in ClinicalTrials.gov. Clin Chem 2014;60:651-659.

Guest

Dr. Daniel Korevaar is from the Department of Clinical Epidemiology, Biostatistics, and Bioinformatics at the Academic Medical Center in Amsterdam.



Transcript


Bob Barrett:
This is a podcast from Clinical Chemistry, sponsored by the Department of Laboratory Medicine at Boston Children’s Hospital. I am Bob Barrett.

Over the past several years, investigations have shown that many clinical studies remain unpublished, and even among published studies the results are often presented selectively. So far, most of these investigations have targeted randomized controlled trials. Much less attention has been paid to studies that estimate the accuracy of diagnostic or prognostic medical tests or biomarkers. In the April 2014 issue of Clinical Chemistry, Daniel Korevaar and his colleagues at the Department of Clinical Epidemiology, Biostatistics and Bioinformatics at the Academic Medical Center in Amsterdam investigated whether non-publication and selective reporting also occur among these test accuracy studies.

Dr. Korevaar is our guest in today’s podcast. Doctor, in previous studies, several methods have been used to investigate failure to publish. Studies have, for example, investigated publication rates among randomized controlled trials approved by ethics committees, among abstracts presented at scientific conferences, or among trials registered in a clinical trial registry. Which methods did you use for your examination of medical tests and biomarkers?

Daniel Korevaar:
Well, we used ClinicalTrials.gov to investigate non-publication and selective reporting among test accuracy studies. Since 2005, the International Committee of Medical Journal Editors (ICMJE) has required researchers to register essential information about the design of their clinical trials in an openly accessible trial registry, and ClinicalTrials.gov is an example of such a registry.

Information that should be registered includes, for example, the trial’s primary and secondary outcomes, information on the methodology of the study, and patients’ inclusion and exclusion criteria. The ICMJE has declared that it will only consider clinical trials for publication if the protocol was registered before study initiation. These conditions apply to “any research study that prospectively assigns human participants to health-related interventions, to evaluate the effects on health outcomes.”

Bob Barrett:
Test accuracy studies often do not directly evaluate effects on health outcomes. Does the International Committee of Medical Journal Editors require studies of test accuracy to be registered?

Daniel Korevaar:
Well, test accuracy studies usually only contribute indirectly to effects on health outcomes, so no, they do not have to be registered at this point by the ICMJE. However, even without an official requirement, many of the existing trial registries already contain test accuracy studies, so this gave us the opportunity to investigate failure to publish and selective reporting in this setting.

So what we did was the following: we searched ClinicalTrials.gov for test accuracy studies that had been registered between 2006 and 2010 and that had been completed before mid-2011. We then carried out our analysis in 2013, at least one and a half years after the announced completion date of the studies.

We searched the biomedical literature for corresponding publications, and if we could not find such a publication, we tried to contact the study investigators to identify one. In this way, we aimed to find out what proportion of studies had remained unpublished, and among published studies we tried to find out whether there was evidence of selective reporting by comparing the registered information with the published data.
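A minimal sketch in Python of the kind of registry-versus-publication comparison described here; the trial IDs, field names, and outcomes below are invented for illustration, and the study itself gathered and compared this information through literature searches and investigator contact, not with code:

```python
# Toy comparison of registered versus published outcomes.
# All records below are hypothetical.

registered = {
    "NCT00000001": {"primary": "sensitivity",
                    "secondary": ["specificity", "PPV", "NPV"]},
    "NCT00000002": {"primary": "specificity",
                    "secondary": ["sensitivity"]},
}

published = {
    # No entry for NCT00000002: no publication was found for it.
    "NCT00000001": {"primary": "NPV",
                    "secondary": ["sensitivity", "specificity", "PPV"]},
}

for trial_id, reg in registered.items():
    pub = published.get(trial_id)
    if pub is None:
        print(f"{trial_id}: unpublished")
    elif pub["primary"] != reg["primary"]:
        print(f"{trial_id}: primary outcome discrepancy "
              f"({reg['primary']!r} registered, {pub['primary']!r} published)")
    else:
        print(f"{trial_id}: consistent")
```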

Bob Barrett:
Dr. Korevaar, what do you believe were the most important results in your Clinical Chemistry paper?

Daniel Korevaar:
Well, the main thing we found -- we included more than 400 studies and found that only about half of them had in fact been published. It was only slightly better if we included only the subgroup of studies that had been announced as completed at least 30 months, instead of 18 months, prior to our analysis.

After this we compared the data in the registry with the data in the corresponding publications regarding the inclusion criteria, the tests under investigation, the threshold for test positivity, and the outcomes that had been defined in the registry and in the publication. We found discrepancies in at least one of these features in about one third of the published studies.

Bob Barrett:
Do you have an example of such a discrepancy?

Daniel Korevaar:
Yeah, sure. Quite a few studies showed discrepancies regarding the primary outcomes. For example, a registered primary outcome had become a secondary outcome in the full publication, or the other way around. One study, for example, evaluated the ability of an imaging test to detect coronary stenosis after heart transplantation.

In the registry it was clearly stated that the primary outcome was the sensitivity of that test, and the secondary outcomes were clearly indicated to be specificity, positive predictive value, and negative predictive value. But when we found the full publication corresponding to that study, the negative predictive value had suddenly become the primary outcome, while the sensitivity, previously defined as the primary endpoint, had been downgraded to a secondary endpoint.

Of course, we were not able to determine from the publication why the order of the outcomes had changed, but it was interesting to see that the negative predictive value, which was now the primary outcome, was almost 100%, while the sensitivity was only 63%.
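To see how a near-perfect negative predictive value can coexist with a sensitivity of only 63%, here is a hypothetical 2x2 table worked through in Python; the counts are invented for illustration and are not the data from the study Dr. Korevaar describes:

```python
# Hypothetical 2x2 table: 1000 patients, 30 with the condition.
# The counts are invented to show how NPV can approach 100% while
# sensitivity is only 63% when the condition is relatively rare.

tp, fn = 19, 11   # diseased: 19 detected, 11 missed -> sensitivity 63%
fp, tn = 70, 900  # non-diseased: 70 false alarms, 900 correct negatives

sensitivity = tp / (tp + fn)  # P(test positive | disease present)  = 0.63
specificity = tn / (tn + fp)  # P(test negative | disease absent)   = 0.93
ppv = tp / (tp + fp)          # P(disease present | test positive)  = 0.21
npv = tn / (tn + fn)          # P(disease absent  | test negative)  = 0.99

print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}, "
      f"PPV={ppv:.2f}, NPV={npv:.2f}")
```

With few diseased patients overall, almost every negative result is a true negative, so the NPV looks excellent even though the test misses more than a third of true cases; this is why changing which measure is called “primary” can change how favorable a study appears.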

In this example, at least all the outcomes were reported and are available to interested parties, so this is actually more an example of what we call “spin” than of selective reporting, and it is, for example, still possible to include these results in a systematic review.

However, we also saw quite a few examples of studies that completely omitted the preregistered primary outcome from the final publication. I think that’s more problematic, because you won’t be able to include those findings in a systematic review, for example.

Bob Barrett:
Failure to publish and selective reporting have also been investigated in other fields of research. Were similar problems identified there?

Daniel Korevaar:
Yeah, investigations in other fields of research have found similar results. Based on those results, it is estimated that only between 45% and 65% of studies get published, which is similar to the results of our study.

Most of those investigations focused on randomized controlled trials; we are not aware of similar investigations of test accuracy studies. The investigations of randomized controlled trials found inconsistencies between registered and published primary outcomes in about 20% to 50% of the studies. When we looked solely at the outcomes defined in the registry and in the final publication, we found discrepancies in about one fourth. So yes, it seems that failure to publish and selective reporting are comparably frequent among test accuracy studies and in previously investigated research areas.

Bob Barrett:
Previous studies have also shown that positive study results have higher chances of being published than negative or inconclusive study results. How does this relate to your study?

Daniel Korevaar:
Yeah, that’s a good question. Previous studies have indeed shown that positive study results have about three times the chance of getting published as negative study results. Unfortunately, we were unable to investigate whether similar mechanisms are active among test accuracy studies, basically for two reasons.

The first reason is that we didn’t have the results of unpublished studies, so we were unable to compare published and unpublished study results. The second reason is that the estimated performance of an investigated test is usually described by measures of diagnostic accuracy, such as sensitivity, specificity, or predictive values.

These are continuous measures, and there is usually no clear cutoff point above which a study result is considered positive. This is in contrast with the reporting of randomized controlled trials, which usually state a null hypothesis and compute an associated p-value, and this p-value directly determines whether a study result is positive or negative.
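A small sketch of this contrast, using invented numbers: a two-arm comparison yields a p-value that the conventional 0.05 threshold turns into a binary “positive” or “negative” verdict, while an accuracy estimate such as sensitivity has no such agreed cutoff:

```python
import math

# Hypothetical randomized trial: event counts in two arms of 200 patients.
e1, n1 = 30, 200  # treatment arm
e2, n2 = 50, 200  # control arm
p1, p2 = e1 / n1, e2 / n2
pooled = (e1 + e2) / (n1 + n2)
se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se
# Two-sided p-value from the normal approximation (two-proportion z-test).
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
print(f"RCT: p = {p_value:.3f} ->",
      "positive" if p_value < 0.05 else "negative")  # binary verdict

# Hypothetical accuracy study: the headline result is a point estimate.
sensitivity = 0.63
print(f"Accuracy study: sensitivity = {sensitivity:.2f} -> no binary verdict")
# Whether 0.63 is "good" depends on the clinical context; there is no
# universal threshold analogous to alpha = 0.05.
```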

So among test accuracy studies, non-publication is unlikely to be associated with statistical non-significance, but for future research it would be very interesting to find out whether higher estimates of diagnostic accuracy are correlated with higher chances of getting published.

Bob Barrett:
In your article you argue that prospective registration of test accuracy studies may be an important solution in the fight against non-publication and selective reporting. Can you explain why?

Daniel Korevaar:
Well, the ICMJE had several reasons for initiating the requirement to register clinical trials. As I indicated, registration of test accuracy studies is currently not required by the ICMJE, but I think that all of those reasons for registration also apply to these studies.

If every study is registered, then research gaps can be identified and unnecessary duplication of research efforts, which would otherwise waste time and money, can be prevented; collaboration between researchers can also be facilitated.

And perhaps the most important reason for implementing the registration policy was increasing concern about the large number of studies that remain unpublished in the biomedical literature. Obviously this raises ethical concerns, because the knowledge gained through research in which humans participated remains unpublished.

Bob Barrett:
Finally, doctor, how can this affect clinical care?

Daniel Korevaar:
Well, doctors are nowadays educated to work according to evidence-based medicine, and when treating a patient the doctor cannot rely solely on his or her own personal, professional experience, but has to apply the best available evidence obtained through scientific research in that specific setting. To do so, doctors usually rely on gathered information, for example from systematic reviews, which include all relevant information on a specific health topic.

If not all information is published, and if favorable results have higher chances of getting published than negative ones, such reviews will be biased and physicians will be unable to adequately practice evidence-based medicine, I think.

Of course, this could endanger patient care. When all research protocols are registered, it will be much easier to identify ongoing and unpublished studies, and if a study is registered, journal editors will have the opportunity to compare the original protocol with the final publication, for example to identify discrepancies and prevent selective reporting.

We have now shown that many test accuracy studies also seem to remain unpublished or to be selectively reported, so I don’t see any reason why registration of these studies shouldn’t become a requirement as well.

Bob Barrett:
Dr. Daniel Korevaar is from the Department of Clinical Epidemiology, Biostatistics, and Bioinformatics at the Academic Medical Center in Amsterdam. He has been our guest in this podcast looking at non-publication and selective reporting of test accuracy studies. I’m Bob Barrett. Thanks for listening.