Editor’s Note: Today’s post is by Mark Bolland, Alison Avenell, and Andrew Grey. Mark is a clinical endocrinologist and Associate Professor of Medicine at the University of Auckland. Alison is a medically trained clinical biochemist and a Professor at the University of Aberdeen, Scotland. Andrew is a clinical endocrinologist and Associate Professor of Medicine at the University of Auckland.
Misinformation is misleading or inaccurate information. The mistake can be honest, or there can be an intention to deceive, in which case it becomes disinformation. Could the failure of a journal to visibly correct known errors in a publication, thereby propagating false information, be considered disinformation?
In October 2021, The Lancet Diabetes & Endocrinology published a Mendelian randomization study which reported:
… for the participants with vitamin D deficiency (25[OH]D concentration <25 nmol/L), genetic analyses provided strong evidence for an inverse association with all-cause mortality (odds ratio [OR] per 10 nmol/L increase in genetically-predicted 25[OH]D concentration 0·69 [95% CI 0·59–0·80]; p<0·0001) …
Mendelian randomization is a method that uses natural genetic variation to assess cause and effect. The authors therefore suggested:
… a causal relationship between 25(OH)D concentrations and mortality for individuals with low vitamin D status. Our findings have implications for the design of vitamin D supplementation trials, and potential disease prevention strategies.
In other words, lower blood levels of vitamin D cause early death (rather than just being a marker of poorer health), implying that supplementation with vitamin D would significantly reduce this risk.
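For readers unfamiliar with the method, the simplest Mendelian randomization estimator is the Wald ratio. The sketch below is a minimal illustration of that idea with entirely hypothetical numbers; it is not the model used in the paper:

```python
import math

# Minimal sketch of the Wald ratio, the simplest Mendelian randomization
# estimator. All numbers are hypothetical.
#
# If a genetic variant raises 25(OH)D by beta_exposure per allele, and
# carrying the allele changes the log-odds of death by beta_outcome,
# then, assuming the variant affects mortality only through 25(OH)D,
# the causal effect on the log-odds of death per unit 25(OH)D is the ratio.
beta_exposure = 3.2    # hypothetical: nmol/L increase in 25(OH)D per allele
beta_outcome = -0.012  # hypothetical: change in log-odds of death per allele

causal_log_odds_per_unit = beta_outcome / beta_exposure
or_per_10_nmol = math.exp(10 * causal_log_odds_per_unit)
print(f"OR per 10 nmol/L increase in genetically predicted 25(OH)D: "
      f"{or_per_10_nmol:.2f}")
```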
The study received a lot of coverage. PlumX metrics show it has been cited 93 times. It has an Altmetric score of 275, in the top 5% of articles. It was accompanied by an editorial which enthusiastically supported the findings, saying they “could have important public health and clinical consequences.”
But in a letter published in January 2023, a correspondent highlighted a major error in the analysis. The original authors had reported a null effect of genetically predicted vitamin D levels in the cohort overall (OR 0.99). However, when they stratified the cohort into four groups by genetically predicted vitamin D, the effect estimate in every subgroup (OR 0.69-0.98) was smaller than the overall estimate. The correspondent pointed out that a causal effect cannot be stronger in every subgroup than in the cohort as a whole, so the analysis must have been flawed. Similar impossible findings were present in other analyses.
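The arithmetic behind the correspondent's objection can be shown with a toy calculation. Treating the overall estimate, as an approximation, like an inverse-variance weighted average of the stratum-specific log odds ratios, the pooled value is a convex combination and must lie between the smallest and largest stratum estimates. The numbers below are hypothetical:

```python
import math

# Toy illustration, with hypothetical numbers, of why an overall OR of
# 0.99 alongside stratum ORs of 0.69-0.98 is arithmetically impossible
# for a consistent causal effect: a weighted average of the stratum
# log odds ratios must lie within the range of the stratum estimates.
stratum_or = [0.69, 0.85, 0.93, 0.98]  # hypothetical subgroup estimates
stratum_se = [0.08, 0.06, 0.05, 0.05]  # hypothetical SEs of log(OR)

log_or = [math.log(o) for o in stratum_or]
weight = [1 / se ** 2 for se in stratum_se]

pooled = math.exp(sum(w * b for w, b in zip(weight, log_or)) / sum(weight))
print(f"pooled OR: {pooled:.2f}")                    # ~0.89
print(min(stratum_or) <= pooled <= max(stratum_or))  # always True
```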
In a reply published in the correspondence section of the same issue, the authors agreed with the correspondent, performed a reanalysis using a different model that is more robust to violations of the assumptions underlying Mendelian randomization, and concluded there was no relationship between genetically predicted vitamin D and mortality. That is, the conclusions of the original publication were wrong.
In the same issue, the editorialists said that their previous interpretation was wrong, but that:
In our view, this incorrect use of Mendelian randomization represents an excellent example of the self-correcting nature of science and its ability to continually improve the insights that it affords to society.
Yet the ‘self-correction’ hasn’t happened. Instead, we have an uncorrected publication whose incorrect analysis produced results that a better analysis showed to be wrong. The evidence that the analysis was unreliable was there in the publication all along.
What should the journal do? The Committee on Publication Ethics (COPE) guidance is clear:
Editors should consider retracting a publication if:
- They have clear evidence that the findings are unreliable, either as a result of major error (e.g., miscalculation or experimental error) …
What has the journal done? It chose to notify readers of the problem in its correspondence section and, 6 months later, posted an expression of concern. No changes have been made to the primary publication or the editorial. The downloadable PDF files of the article and editorial do not flag a problem with the analysis or state that the results are wrong. On the journal's website, the article page links directly to the correspondence but not to the expression of concern, and those links give only the title of the correspondence, with no indication that the analyses in the article are wrong and were corrected in the letter. The letter itself has not been cited, while the author response has been cited 3 times. That is, the correct version of the findings is nearly invisible.
Most current readers of the primary publication or editorial will be unaware of the correction or the expression of concern. It is difficult to see how they could discover that the analyses, and the interpretations based on them, are wrong.
Not a unique example
The Lancet Diabetes & Endocrinology took a similar approach in another recent case involving vitamin D. A 2021 meta-analysis of randomized trials concluded that vitamin D supplementation reduced the risk of acute respiratory tract infection.
Two of the included trials were cluster-randomized, but the meta-analysis treated them as though they were individually randomized. Treating cluster-randomized data as individually randomized is one of the most common mistakes statisticians identify during peer review. When clustering is not accounted for, a trial's effect estimate appears much more precise than it really is, so the trial receives greater weight, and undue influence, in the pooled meta-analysis.
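The standard correction inflates a cluster trial's variance by the design effect, DE = 1 + (m - 1) × ICC, where m is the average cluster size and ICC is the intracluster correlation coefficient. A minimal sketch with hypothetical numbers:

```python
# Why ignoring clustering inflates a trial's weight. All numbers are
# hypothetical. Inverse-variance meta-analysis weights are 1/SE^2, so
# an unadjusted cluster trial gets exactly DE times the weight it
# deserves.
m = 20      # hypothetical average cluster size
icc = 0.05  # hypothetical intracluster correlation coefficient
se = 0.10   # hypothetical standard error of log(OR), ignoring clustering

design_effect = 1 + (m - 1) * icc  # 1.95
se_adjusted = se * design_effect ** 0.5

weight_ratio = (1 / se ** 2) / (1 / se_adjusted ** 2)
print(f"design effect: {design_effect:.2f}")
print(f"unadjusted weight / adjusted weight: {weight_ratio:.2f}")  # = DE
```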
In this case, the pooled effect estimate when the trials were incorrectly analyzed was OR 0.92 (95% CI 0.86-0.99; p=0.02). Only one of the two cluster trials could be re-analyzed as cluster-randomized. When that was done, the pooled effect estimate became OR 0.94 (95% CI 0.88-1.01; p=0.07). Thus, the correct analysis showed no statistically significant effect, an important change in the interpretation of the meta-analysis.
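To see how the reweighting alone can move a result across the significance threshold, here is a toy fixed-effect pooling. The trial values are entirely hypothetical, chosen only to reproduce the qualitative pattern, and are not the data from the meta-analysis:

```python
import math

def pool(log_ors, ses):
    """Fixed-effect inverse-variance pooled OR with 95% CI."""
    w = [1 / s ** 2 for s in ses]
    b = sum(wi * bi for wi, bi in zip(w, log_ors)) / sum(w)
    se = (1 / sum(w)) ** 0.5
    return tuple(round(math.exp(x), 2) for x in (b, b - 1.96 * se, b + 1.96 * se))

# Hypothetical trials; the last one is cluster-randomized.
log_ors = [math.log(x) for x in (0.98, 0.97, 0.70)]
ses = [0.05, 0.06, 0.07]

print("clustering ignored:  ", pool(log_ors, ses))
# -> roughly OR 0.90 (0.85, 0.97): CI excludes 1

# Inflate the cluster trial's SE by sqrt(design effect), here DE = 1.95.
ses_adj = ses[:-1] + [ses[-1] * 1.95 ** 0.5]
print("clustering accounted:", pool(log_ors, ses_adj))
# -> roughly OR 0.93 (0.87, 1.00): CI now crosses 1
```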
How did the journal handle this problem? It published our letter identifying the error but would not permit us to report any results. The authors responded to our letter, arguing that the cluster trial was so small that re-analysis would make no difference to the overall effect estimate, and reported an unchanged pooled estimate. However, they subsequently admitted to a coding mistake, and their response has now been corrected to state that the correct pooled effect estimate for the meta-analysis is OR 0.938 (95% CI 0.876-1.005; p=0.068). A PubPeer thread has more detailed information.
Thus, once again we have a publication in which an incorrect analysis was used; when the correct approach is taken, the result for the primary outcome changes, in this case from statistically significant to non-significant.
Again, readers have not been adequately, visibly, and directly informed about this state of affairs. The journal provides links to the correspondence on the article website without any indication of the change in results when the correct analysis is used. The accompanying editorial has not been updated. The original publication has been cited >200 times, whereas our letter has been cited once and the author response has not been cited.
Active misinformation?
What is the word for misinformation that is wilfully propagated? The information in the primary publications in both cases is wrong, and readers are unlikely to know it. There is no suggestion that the original errors were anything other than honest.
The journal knows the primary publications are wrong, but has chosen not to correct them, instead opting only to provide the correct results, nearly invisibly, in the correspondence sections. Thus, misinformation is being wilfully propagated by an act of omission (failing to correct the publications) rather than an act of commission (deliberately propagating false information). This approach is clearly unsatisfactory. Is it too much of a stretch to say this is disinformation?
It seems reasonable that journals and their publishers should be held responsible for both their acts and their omissions. So, a reasonable conclusion is that the inadequate correction of false information in these two cases turned misinformation into disinformation. Whatever the exact term, this is yet another example of how the system for correcting publications is broken, and of how the most important players appear not to care.