Even a casual reader of the scientific literature knows there is an abundance of papers linking some minor detail of article publishing (say, the presence of a punctuation mark, a joke, or a poetic or popular-song reference in one's title) with increased citations. Do X and you'll improve your citation impact, the researchers advise.
Over the past few years, the latest trend in "X leads to more citations" research has focused on minor interventions on social media, like a tweet.
It is easy to see why researchers show so much enthusiasm for this type of research: compared to a clinical trial on human subjects, these studies are far easier to conduct.
First, there is no need to seek and secure significant research funds, enroll willing human subjects, or train a cadre of clinicians to run a trial. Further, there are no Institutional Review Boards to placate, protocols to register, or fundamental worries that your intervention has the remote possibility of doing real harm to real patients. As a social media-citation researcher, all you need is a computer, a free account, and a little time. This is probably why there are so many who publish papers claiming that some social media intervention (a tweet, a Facebook like) leads to more paper citations. And given that citation performance is a goal that every author, editor, and publisher wishes to improve, it’s not surprising that these papers get a lot of attention, especially on social media.
The latest of these claims that X leads to more citations, "Twitter promotion is associated with higher citation rates of cardiovascular articles: the ESC Journals Randomized Study," was published on 07 April 2022 in the European Heart Journal (EHJ), a highly respected cardiology journal. The paper, by Ricardo Ladeiras-Lopes and others, claims that promoting a newly published paper in a tweet increases its citation performance by 12% over two years. The EHJ also published this research in 2020 as a preliminary study, boasting a phenomenal 43% citation improvement.
Media effects on article performance are known to be small, if present at all, which is why a stunning report of 43% followed by a far less stunning 12% should raise some eyebrows. The authors were silent about the abrupt change in results, as they were about other details missing from their paper. They were silent when I asked questions about their methods and analysis, and they won't share their dataset, even for validation purposes. Sadly, this is not a unique experience.
According to the European Heart Journal's author guidelines, all research papers are required to include a Data Availability Statement, that is, a disclosure of how readers can obtain the authors' data when questions arise. In addition, EHJ papers reporting the results of a randomized controlled trial are required to follow CONSORT guidelines, which include an explicit statement of how sample sizes were calculated. Both the Data Availability and Sample Size Calculation statements were omitted from this paper. If this paper were about a pharmaceutical intervention for atrial fibrillation, I suspect these omissions would have raised red flags before publication.
Just another underpowered, over-reported study?
Based on my own calculations, I suspect that the Twitter-citation study was acutely underpowered to detect the reported (43% and 12%) citation differences. Given the journals, types of papers, and primary endpoint described in the paper, I calculated that the researchers would require a total sample size of at least 6694 (3347 in each arm) to detect a one-citation difference after two years (SD=14.6, power=80%, alpha=0.05, two-sided test), about ten times the sample size reported in their paper (N=694). With 347 papers in each study arm, the researchers had a statistical power of just 15%, meaning they had only a 15% chance of detecting a non-null effect if one truly existed. For medical research, power is typically set at 80% or 90% when calculating sample sizes. In addition, low statistical power tends to exaggerate effect sizes. In other words, small sample sizes (even if the sampling was done properly) tend to generate unreliable results that over-report true effects. This could be why one of their studies reports a 43% citation benefit while the other reports just 12%.
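For readers who want to check my arithmetic, the calculation above can be reproduced with a short sketch using the normal approximation for a two-sided, two-sample comparison of means. The SD of 14.6, the one-citation difference, and the per-arm counts are the figures stated above; the function names are my own.

```python
from math import ceil, sqrt
from statistics import NormalDist

Z = NormalDist()  # standard normal

def sample_size_per_arm(delta, sd, alpha=0.05, power=0.80):
    """Per-arm n to detect a mean difference `delta` between two groups
    with common SD `sd` (two-sided test, normal approximation)."""
    z_alpha = Z.inv_cdf(1 - alpha / 2)
    z_beta = Z.inv_cdf(power)
    return ceil(2 * ((z_alpha + z_beta) * sd / delta) ** 2)

def achieved_power(n_per_arm, delta, sd, alpha=0.05):
    """Power of the same test at a fixed per-arm sample size."""
    z_alpha = Z.inv_cdf(1 - alpha / 2)
    ncp = delta / (sd * sqrt(2 / n_per_arm))  # noncentrality parameter
    return Z.cdf(ncp - z_alpha) + Z.cdf(-ncp - z_alpha)

print(sample_size_per_arm(1, 14.6))   # → 3347 per arm (6694 total)
print(achieved_power(347, 1, 14.6))   # → ≈ 0.15
```

The exact t-based calculation differs from this normal approximation only trivially at these sample sizes; either way, 347 papers per arm leaves roughly a 15% chance of detecting a true one-citation difference.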
I contacted the corresponding author, Ladeiras-Lopes, by email with questions about the sample size calculation and secondary analyses (sent 5 May 2022) but received no response. One week later, I contacted him again to request a copy of his dataset (sent 10 May 2022), but again received no response. (I do have prior correspondence with Ladeiras-Lopes from earlier this year.)
Abdication of editorial responsibility?
After another week of silence, I contacted the European Heart Journal's Editor-in-Chief (EiC) on 16 May 2022, asking the editorial office to become involved. I asked for a copy of the authors' dataset and for the journal to publish an Editorial Expression of Concern for this paper. The response from the editorial office was an invitation to submit a letter outlining my concerns to their Discussion Forum. If accepted, EHJ would publish my letter, along with the authors' response, in a future issue of the journal. The process could take a long time, and there was no guarantee that I would end up with a copy of the dataset. I asked whether the editor's response meant that the journal was not getting involved and that this was simply a matter to be settled between reader and author. I finally received a response on 7 June 2022 reiterating that a formal Discussion Forum contribution is the first step in the process, and that if the authors fail to respond, the editorial office "could escalate [the issue] to the ESC Family Ethics Committee requesting further investigation."
The European Heart Journal has clear rules for the reporting of scientific results and even clearer rules for the reporting of randomized controlled trials. Whether the research results described in the paper are valid is beside the point: required parts of the paper are missing. Even if my analysis had shown that the conclusions were justifiable, the authors clearly violated two EHJ policies.
Requiring a concerned reader to submit a formal article for publication in the journal seems an inappropriate response to a simple question about journal policies. Requiring the reader to jump through time-consuming hoops that involve escalating such a violation to a society-level committee is a clear abdication of editorial responsibility. Either the EiC needs to explain why this paper was exempted from EHJ rules, or the journal should uphold its editorial standards by issuing a retraction or, at least, a correction. If my experience is typical, I imagine most readers with concerns about EHJ papers simply give up. If science is a self-correcting process, this journal is not making the correction process easy.
When I step back and put this Twitter study in perspective, I have to remind myself that it represents little real-world risk: no patients will be harmed or die because of these results, and a few researchers even got two publications in a top-tier medical journal. The worst outcome may be a lot of collective time wasted by researchers convinced that X (in this case, tweeting) will make a difference in their papers' citation performance. Sadly, this blog post will add to the article's Altmetric score and put another colored swirl in its performance donut.
Update (15 June 2022): In the next post, I receive the authors’ data and reveal evidence of p-hacking.