Editor’s Note: Today’s post is by Haseeb Irfanullah and Are Brean. Are is Chief Editor of The Journal of the Norwegian Medical Association, Associate Professor at The Norwegian Academy of Music, and Vice President of the European Association of Science Editors.
The 18th General Assembly and Conference of the European Association of Science Editors (EASE) was held in Oslo, Norway on May 14-16, 2025. The event’s final session was a debate, where Haseeb Irfanullah argued in favor of a motion declaring that journal editors do not need to worry about preventing the spread of misinformation, while Are Brean argued against it. This article is based on that debate.
Haseeb Irfanullah – In favor of the motion
The debate motion can be dissected into three components. First, it has a human element: journal editors take on diverse roles and responsibilities, underscored by a wide range of designations, such as Chief Editor, Managing Editor, Associate Editor, Subject Editor, Technical Editor, Language Editor, and Copy Editor. The second element is misinformation. Over the two days of the EASE Conference, delegates heard about and discussed misinformation at length. This is an important as well as a challenging issue. I noticed diversity in how ‘misinformation’ is defined. There are different aspects, perceptions, and dimensions, governed by a wide range of factors. It is an evolving concept, where context specificity is crucial. But where does it stand within scholarly publishing? How is it different from academic fraud or academic negligence? How does it originate in and spread from the academic ecosystem? And what is the relationship between data, information, knowledge, and misinformation? The third and final component of the motion is taking action against the spread of misinformation or stopping its harm. This essentially brings together ‘human/role’ and ‘misinformation’.
Within this context, I have three core arguments to support the motion: persona, priority, and position.
Persona:
Editors have specific terms of reference, or job responsibilities, by which they abide. These are often guided by community norms and by the expectations of stakeholders in the community and beyond. But there are certain things editors do that make them ‘editors’. If an editor does something that an author, a peer reviewer, a publisher, an administrator of a research institute, or a government regulatory authority is supposed to do or should do, then the editor is not acting as an editor at that very moment. Yes, I know, in small, regional, or society journals, the roles of editors and other stakeholders may get blurred. There, one person can wear many different hats. But an editor’s hat is always an editor’s hat.
Also, editors already have a lot of small and big tasks to do on any given day to get a journal published. We all know that, but we can always take a quick look at the COPE International Standards for Editors or the EASE Toolkit for Journal Editors to remind ourselves. Editors’ tasks are very much linked to journal quality, publishing integrity, quantity of articles, journal indexing, timeliness of the workflow, branding and reputation, coordination, and growth, for example. But preventing misinformation from spreading is simply not one of them.
So, tackling the spread of misinformation doesn’t fit with the editors’ persona.
Priority:
Of course, countering the above point, some of you may say that the editor’s role is not static. It is (kind of, always) evolving! And you are very much right about that, especially given the need to continuously tackle emerging shocks and stresses shaking the publishing industry, sometimes as an earthquake, sometimes as a tremor.
Let me highlight some pressing issues and challenges editors are facing as I write this piece.
- Editors now have to think about publishing integrity more than ever before: the use, non-use, and misuse of AI; all-out attacks from paper mills and other academic deceptions (like hijacked journals); and the professional dilemma of whether to be involved in so-called ‘special issues’ of journals, given that they often exploit the industry and the authors.
- When something bad happens, damage minimization is a crucial task for editors. Such incidents can arise, for example, from ethical complaints against authors, even leading to article retraction, or from conflicts among authors.
- Tackling peer reviewer fatigue can drain editors: dealing with reluctant, non-responsive, and slow-to-respond reviewers, as well as with anxious authors who are forced to wait many months for the first round of reviewers’ comments.
- Editors are increasingly facing pressure from the wider publishing sector to contribute to the Sustainable Development Goals (SDGs), to reduce carbon footprints, to ensure Diversity, Equity, Inclusion, and Accessibility (DEIA) within the system, and to promote early career professionals in the editorial workforce, as much as possible.
- Shrinking research and publication funding is yet another concern, which has seen much discussion in recent months due to some drastic measures taken by the current Trump administration.
Given these burning concerns, both at personal and journal levels, can we really think that the broader spread of misinformation is a priority for editors right now?
Position:
If you are still not convinced, let me present the final point: Editors are not even in a position where they can prevent misinformation from spreading. Let me explain why (also, see Figure 1 below).
In the research ecosystem, researchers access existing research, the global literature, to design and conduct new research. Here, researchers use data and information to create knowledge. Editors deal with this new knowledge, captured in manuscripts produced after a research project is completed. In the context of scholarly publishing, ‘misinformation’ could originate from ‘published knowledge’: by misleading, misinterpreting, partially presenting, or offering opaque interpretations of it. Misinformation, then, is a badly modified version of knowledge, created after the publication or communication of research. In other words, misinformation comes into play at the ‘use of research’ stage of the research system. Editors do not, and cannot, deal with misinformation, because they essentially operate in the ‘communicating research’ space. Moreover, by law, practice, and social norm, it is regulatory bodies, fact checkers, media, social media, concerned sectors, and law enforcement agencies that have the primary role in preventing the origin and spread of misinformation, given its social, cultural, economic, and political implications. This in turn affects ‘research impact’. So, trying to prevent misinformation is beyond the editor’s jurisdiction.
Therefore, journal editors do not need to be anxious about preventing misinformation from being spread, because it doesn’t fit with their persona, their priorities, or their position.
Are Brean – Against the motion
Misinformation has always been around. But with the advent of social media, misinformation can be amplified to hitherto unknown proportions. Thanks to modern travel, a physical virus can travel the globe in a few hours. But thanks to the internet, and especially social media, misinformation “viruses” can make the same journey in milliseconds. Misinformation has thus become one of the major threats to society. It fuels wars, threatens democratic processes, and undermines the health of both our planet and the people who inhabit it.
More than half of the American population now at least sometimes gets their news from social media. Social media is rife with misinformation, not least about science, both in user-generated content and in the form of advertising. Misinformation about science has consequences. Declining rates of childhood vaccination are now aiding the reemergence of many infectious diseases in the US, with measles threatening to become endemic unless vaccination rates return to previous levels.
One of the problems with misinformation in social media is this: Science’s most important task is to shed light upon a complex world that pretty much lies in darkness. But as everyone who has ever used a flashlight will know: As soon as you cast a light, you also create new shadows. This calls for nuances and caveats. Science and scientific articles are therefore complicated, as any researcher and editor will know. Simple answers to complicated problems are very few and far between (plus we rightfully tend to be very suspicious of such answers). And to make science even more boring, the scientific ideal is one of emotional blandness: Avoid spin, show all relevant data, avoid using emotional wording, do not overuse adjectives, show the nuances – show all the strengths and weaknesses, not only the conclusion. Social media algorithms, on the other hand, favor just the opposite: Short conclusive messages without context or nuance, clickbait headlines made to induce emotion, polarized and polarizing messages, drawing all attention to the most extreme positions.
Neither scientists nor science editors are, of course, immune to this. In the attention economy, they compete for eyeballs just like everyone else. Scientists need attention when competing for research grants, positions, and tenure. Science editors need attention when competing for readers and authors. And they themselves also rely on digital search engines and even social media when looking for information within their field. They are vulnerable to spin and misinformation, as well as to papers coming from predatory journals and paper mills. And digital algorithms amplify groupthink and confirmation bias, for scientists and science editors just as for the general public.
However, there is one important difference: scientists are generally more literate and better trained in critical reading than the general audience. And science editors are even more so. We are highly trained in seeing through academic fog and expert lingo, recognizing spin, half-lies, and lies based on both good and bad science, exposing biases, and not taking assertions at face value. After all, these are the skills we hone every day at work. Therefore, if there is one group that should excel at recognizing misinformation in science, it is science editors. Misinformation is one of the major threats to modern society. For the sake of our common future, every citizen has an obligation to fight it. In my opinion, the most highly skilled among us have the strongest obligation. We, the science editors, are not only advocates of science; we are also among the most highly skilled at recognizing when science is misused. Therefore, we need to worry the most about misinformation being spread.
Discussion
Literally the only point of an academic journal is to publish real, verified, filtered, accessible information. If that were to stop, the whole house of cards would fall down. The goal going forward is to pressure publishers to support editors in identifying false information.
An incredibly timely and thought-provoking exchange. While I appreciate Haseeb’s structural framing, I must lean toward Are Brean’s perspective. In a time where misinformation is not just pervasive but weaponized, science editors cannot afford neutrality. Their gatekeeping role doesn’t end at peer review; it extends to anticipating how published knowledge might be misused. Editors may not control post-publication discourse, but they can lead in promoting transparency, nuance, and ethical authorship practices that help inoculate public trust against distortion. Editorial responsibility today must evolve with the reality of misinformation’s reach.
However, the editorial gatekeeping function seems to be more or less broken in many journals too.
I mean: all too often an editor blindly trusts reviewers’ reports, likely without even having read the manuscript in question. The so-called button-based voting is the worst sign of this lack of professionalism. Setting aside fraud like manipulated peer review, as an author you nowadays almost always see at least one superficial review. You also see other common sins (e.g., “I would have written a different paper” or “please also do a survey”). And not only that; today, you also frequently see LLM-generated garbage reviews that are close to misinformation themselves. Times Higher Education had a good take on this point a few days ago.
Editors also need to step in in this regard, frankly pointing out what is sensible in reviewers’ comments, what is nitpicking, and what is pure nonsense. A lot of good, or decent enough, research gets rejected for who knows what reason.
Nature’s recent decision to open peer review reports can be applauded too. Though I suspect most journals won’t dare to follow suit, because their peer reviewing is so poor that it would be embarrassing for them.
Thank you for focusing on the weaknesses of the current peer review process, which I think has lost its human face.
https://scholarlykitchen.sspnet.org/2025/04/09/peer-review-has-lost-its-human-face-so-whats-next/
I don’t think editors can do anything practical to fix it 100%, given the pressure from publishers and authors to publish (and to make a profit, or to sustain themselves, for that matter). If we link the present broken peer review system with the spread of misinformation, I doubt we can improve the current situation unless we move from the present human-dependent review system to an AI-dependent one.
Thanks for the reply! Although I had already read it, the linked article is indeed an insightful take for others to read too.
That said, I disagree with the recent push toward LLM-based reviews. And I am not sure whether all the alternatives have been fully explored.
For instance, the practices in computer science conferences might provide a route forward. That is: why do reviewers need to be external parties to begin with? With a sufficiently large editorial board with sufficient expertise, editors themselves could easily review the manuscripts submitted. After all, they are (or should be) the experts themselves. Granted, this option would only work in specialized journals. But maybe that could be a way forward too, instead of the “mega journals” established in the last decade?
Thank you for your comment, Maryam. But I am not quite sure that editors “can lead in promoting transparency, nuance, and ethical authorship practices that help inoculate public trust against distortion.” The simple reason is: how would editors do that? As editors, they can write editorials for their journals and talk about the authorship practices you mentioned. Does the public read those? Also, what more can they do to build public trust? When they give interviews to the media, speak at conferences, or speak to policymakers at roundtables, do they do that as editors or as scientists? Indeed the latter. I don’t think the world is ready to hear ‘editors’. Maybe it recognises scientists, especially the celebrity ones. But editors? I have my doubts.
What about “first do no harm”?
One of the primary purposes of the medical sciences is to cure illness (using the term broadly). Misinformation cannot cure anything and in all likelihood will do harm.
Furthermore, what does it say about a journal’s affiliated organization (members, leaders, staff) if the content in the journal isn’t above reproach?