Editor’s Note: We end this year’s Peer Review Week with a guest post on the recent Ninth International Congress on Peer Review and Scientific Publication, by Erin Landis, Meghan McDevitt, and Jason Roberts, respectively Managing Director, Managing Editor, and Senior Partner at Origin Editorial. A list of the presentations they reference is included at the end of the post — more information can only be accessed by Congress attendees at present.

It seems rather appropriate that one of us three authors is sitting by Measurement Park in Toronto to write about the recently concluded Ninth International Congress on Peer Review and Scientific Publication, a quadrennial gathering of some of the world’s most prominent researcher-influencers to report findings on a multitude of problems, issues, and behaviors that vex the peer review and publication processes. It should come as no surprise that peer review continues to be an imperfect system.

This year’s Peer Review Week, which ends today, follows closely behind the Peer Review Congress, and its theme of “Research Integrity: Creating and Supporting Trust in Research” meshes neatly with the stated research interests of the Congress: enhancing the quality and credibility of the scientific literature. It does seem that the more we look, the more doubts arise about the overall validity, honesty, and utility of published research reports. There are any number of reasons why we should all care about this: growing uncertainty over whether we can believe what we read; the wasting of billions of dollars of research funds; misdirected future research efforts; poor treatment outcomes for patients; the corrosive impact of spin and bias; the ever-growing public mistrust of science and the scientific process; and more.


Inevitably, the use of words like “integrity” and “trust” in reference to scholarly publishing immediately conjures thoughts of the exact opposite behaviors and their unremitting prevalence. Some of the results presented at the Congress suggest that it is not malfeasance or misconduct that drives this incongruence. Instead, there is a lack of understanding of the demands and responsibilities associated with publishing research, and a general lack of researcher self-awareness about their own limitations (particularly with regard to study design and methods, as well as bias). By including the word “integrity” in the theme for Peer Review Week, the organizers are likely referencing the quality of honesty. But the secondary meaning of the word “integrity” — the quality of being whole or complete — is also apt. The community of researchers and peer reviewers is a somewhat closed ecosystem: researchers develop studies, other researchers critique those studies and then perform their own, and round the process goes in a relatively closed loop. But, as several presentations at the Congress showed, and as any long-time editor or professional editorial-office staff member can attest, there are many gaps within this ecosystem that undermine the structural integrity of the entire endeavor. They include poor reporting that obfuscates what was done and whether we are being told the complete story; substandard training in basic research methods, resulting in under-powered studies; no grounding in how to successfully write up results for publication; an ongoing pervasiveness of privilege and exclusion that restricts who can participate; a tenuous grasp of what it means to “please research responsibly”; and a lack of systematic ways to eradicate poor practice.

Addressing diversity and inclusion

Reassuringly, the Congress did tackle head-on one of the most pressing issues of our time: representation and inclusion. These issues were thrown into stark relief by the global movements for social and racial justice during the summer of 2020, and the scholarly publishing ecosystem has been grappling with how to address them ever since. Outcomes of various initiatives were presented at the Congress, demonstrating that while progress has been made, there is still a lot of work to be done.

In one presentation, Ana-Catarina Pinho-Gomes (1) reported on a cross-sectional study of 47 BMJ journals, which found that women comprised less than one third of peer reviewers. Journals with a higher percentage of women as editors, or a woman as editor-in-chief, also had a higher representation of women among peer reviewers. So, while a gender gap persists, greater representation of women in journal leadership positions may be associated with greater representation of women as peer reviewers.

Micah Altman and Philip Cohen (2) also examined editorial board diversity, comparing the composition of editorial boards at open access (OA) versus closed journals using a dataset of 14,228 journal editorial boards. Across all journal types, less than one third (28.7%) of editorial board members were women, although this varied greatly by discipline (26% in engineering journals; 53% in education journals). International diversity was higher on the editorial boards of OA journals; however, board composition was disproportionately male and US- or UK-centric across both journal types.

In a study of the geographic diversity of reviewers conducted by Khaoula Ben Messaoud (3), of the 257,025 reviewers invited across 21 BMJ Publishing Group biomedical journals, 90,467 (35.2%) agreed to review. “Compared with high-income countries, agreement was higher among reviewers from lower middle–income countries (3.26; 95% CI, 3.06-3.48) and low-income countries (2.99; 95% CI, 2.57-3.48) (P < .001).” This suggests that better inclusion might help stem the continuing decline in the rate at which invitations are converted into agreements to review.

Holly Falk-Krzesinski (4) provided a promising update on the multi-publisher Joint Commitment for Action on Inclusion and Diversity in Publishing, whose goal is “to develop gender identity, race, and ethnicity schemas composed of questions and options that would resonate with researchers around the globe and engender their willingness to self-report when presented with the diversity questions within editorial management systems.” After 18 months of research and development, the Joint Commitment collective endorsed diversity data schemas developed through a rigorous process that included a large-scale global survey, a review of the published literature, and input from working group members and an external subject matter expert. The schemas are publicly shared, and an initial group of publishers recently began implementing them.

How to address the persistent lack of published reports from researchers in low- and middle-income countries (LMICs) is the question that Gandolina Melhem et al. tried to answer with their study of the correlation between international editorial staff and international publications in leading biomedical journals (5). They hypothesized that greater representation of individuals from LMICs on editorial boards would be associated with more publications from these countries, and, based on their analysis of 45 journals, they concluded that greater representation is indeed associated with a moderate increase in such publications.

In August 2021, JAMA published its Updated Guidance on Reporting Race and Ethnicity in Medical and Science Journals. Annette Flanagin et al. set out to determine the impact of the new guidance in a study comparing race and ethnicity reporting in three JAMA Network medical journals before and after its implementation (6). Of the 249 articles included in their analysis, more reported how race and ethnicity were determined after the guidance was implemented; however, race and ethnicity were still underreported compared with other demographic information such as age, sex, and gender. More progress is needed around the reporting of race and ethnicity, and as this guidance is taken up by the biomedical publishing community, hopefully these results will look quite different over the next several years.

Can research institutions do more?

Sitting through many hours of presentations, we found it increasingly frustrating to hear what is so often unspoken: that researchers struggle to define accountability and responsibility in the context of the scientific process. Nicola Di Girolamo et al. (7) estimated that perhaps as much as one third of authorship ascriptions for the research reports they studied showed evidence of honorary or otherwise inappropriate awarding of authorship when measured against ICMJE criteria. Inappropriate authorship — and its downstream impact — has been discussed ad nauseam. Di Girolamo et al. suggest that consensus is still needed on what authorship means and constitutes, but will that ever come to pass? And, if so, how will a culture of accountability be inculcated? Journals, with peer review as the principal weapon in their armory, will be expected to detect, correct, and enforce, but will institutions also play their part in girding their researchers for the twin challenges of doing (research) better and behaving better?

What was otherwise a relatively minor presentation perhaps offered the most exasperating illustration of the apathy towards responsibility that journals are regularly confronted with. Alison Avenell et al. (8) reported on authors of systematic reviews and guidelines who failed to realize that articles they cited had been retracted, which in turn may affect the outcomes and conclusions of the systematic reviews and guidelines that continue to cite that retracted work. Avenell and colleagues contacted the authors of 86 publications citing retracted work. They heard back from the authors of only 51% of those publications (and it took emailing three individual authors per publication, rather than writing solely to the corresponding author, to improve the chances of eliciting a response). Even when authors did respond, a follow-up study one year later found that just nine papers had been corrected. This study is a microcosm of the ongoing malaise within scientific publishing. To the casual observer of the scientific publication process, it suggests that energy is disproportionately expended on generating a publication, or a series of publications, rather than on ensuring accuracy, completeness, and transparency. Imagine that! As attendees at the Congress, we found ourselves asking whether journals must carry the burden of improving standards alone. Can institutions do more to promote best practices, or more readily offer their support when problems are detected?

Figuring that they alone will probably have to be the agents of change, many journals are increasingly interested in working upstream, supporting authors during the study design phase (e.g., registered reports) or investing in training the next generation (an excellent presentation by Ariel Lyons-Warren (9) demonstrated the success of a mentored training curriculum in peer reviewing manuscripts at the journal Neurology). However, journals really need institutional allies if the needle is to move.

Technological solutions

Some presenters and audience members suggested that AI offers grounds for hope by addressing what authors and reviewers, who are only human, cannot. One of the most obvious suggestions was an automated solution for detecting deficiencies in reporting standards (e.g., automated application of the CONSORT criteria for randomized controlled trials) — something the research world is crying out for. Certainly, as research presented by Benjamin Speich et al. (10) demonstrated, relying on regular reviewers to even look for a bare minimum of essential reporting criteria is an exercise in futility. Not all talk of AI was idle wishful thinking, either: Daniel Evanko presented promising real-world data on a machine-assisted process for detecting image duplication (11). A research conference on peer review was never likely to be a forum for discussing how to fund and implement technical solutions. However, if some of these dreams become realities, or if business cases can be crafted to justify the investment of effort, AI solutions could also be used as aids during the authoring process, rather than simply acting as somewhat punitive tools of detection.
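None of the Congress presentations shared the internals of such tools, but the basic idea of machine-screening a manuscript against a reporting checklist is easy to picture. Below is a deliberately naive, purely illustrative Python sketch of keyword-based screening for a few CONSORT-style items; the item list, the regular expressions, and the screen_manuscript function are our own simplified assumptions rather than the CONSORT checklist or any tool discussed at the Congress, and a real system would rely on trained language models rather than keyword matching.

```python
# Purely illustrative sketch: a naive keyword screen for a handful of
# CONSORT-style reporting items. The cues below are simplified assumptions,
# not the CONSORT checklist itself; production tools use trained NLP models.
import re

REPORTING_CUES = {
    "randomization method": r"\brandom(ly|ised|ized|isation|ization)\b",
    "blinding / masking": r"\b(blind(ed|ing)?|mask(ed|ing))\b",
    "sample size calculation": r"\b(sample size|power (calculation|analysis))\b",
    "trial registration": r"\b(NCT\d{8}|ISRCTN\d+|trial regist(ered|ration))\b",
}

def screen_manuscript(text: str) -> dict:
    """Return, for each reporting item, whether any textual cue was found."""
    return {
        item: bool(re.search(pattern, text, flags=re.IGNORECASE))
        for item, pattern in REPORTING_CUES.items()
    }

if __name__ == "__main__":
    methods_section = (
        "Participants were randomly assigned (1:1) to intervention or placebo. "
        "Outcome assessors were blinded to treatment allocation."
    )
    for item, found in screen_manuscript(methods_section).items():
        status = "mentioned" if found else "NOT FOUND - flag for editor/reviewer"
        print(f"{item}: {status}")
```

Even a crude screen like this hints at the appeal of the idea: items that are simply absent could be flagged before a human reviewer ever opens the manuscript.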

Manuscripts submitted to journals from paper mills continue to be a concern, and one where editorial offices would gladly welcome a technological solution. Anna Abalkina presented results from a study that attempted to identify a set of common factors associated with fraudulent papers from a single paper mill (12). From 2019-2021, at least 43% of papers published in 154 journals could potentially be linked to a paper mill. Shockingly, over 800 researchers from 39 countries purchased co-authorships. Common anomalies in questionable papers included authors who did not have shared research interests, who were affiliated with different institutions, or whose research focus was not related to the paper’s subject. Detecting potential instances of fraudulent papers submitted to journals is challenging. Fittingly, COPE’s webinar during Peer Review Week focused on some practical steps for managing paper mills, and a set of recommended actions is now available, with more guidance to come. These multi-pronged and inclusive efforts provide some guidance to journals encountering submissions from paper mills, but the prospect of adding more checks at the editorial office level is daunting, considering the abundance of work already performed and the challenges of being under-staffed and under-resourced.
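Abalkina’s investigation was manual and forensic, but the anomalies she describes suggest the kind of heuristic pre-screening an editorial-office tool might one day offer. The sketch below is purely illustrative and entirely our own assumption: the Author record, the flag_submission helper, and the checks are hypothetical, not Abalkina’s method or any production screening tool, and any real screen would require far richer metadata plus human judgment before acting on a flag.

```python
# Purely illustrative sketch: crude heuristics loosely inspired by the anomalies
# described above (co-authors with no shared affiliation or overlapping research
# interests). Hypothetical structures, not a real screening tool.
from dataclasses import dataclass

@dataclass
class Author:
    name: str
    affiliation: str
    research_keywords: set

def flag_submission(authors, paper_keywords):
    """Return human-readable flags for editorial staff to review manually."""
    flags = []
    # Anomaly 1: no two co-authors share an institution.
    if len(authors) > 1 and len({a.affiliation for a in authors}) == len(authors):
        flags.append("No two co-authors share an institution")
    # Anomaly 2: co-authors have no research interests in common.
    if len(authors) > 1 and not set.intersection(*(a.research_keywords for a in authors)):
        flags.append("Co-authors have no research interests in common")
    # Anomaly 3: an author's stated interests do not touch the paper's topic.
    for a in authors:
        if not a.research_keywords & paper_keywords:
            flags.append(f"{a.name}: research focus does not match the paper's subject")
    return flags

if __name__ == "__main__":
    submission = [
        Author("A. One", "University X", {"oncology"}),
        Author("B. Two", "Institute Y", {"civil engineering"}),
    ]
    for flag in flag_submission(submission, paper_keywords={"nanomaterials"}):
        print("FLAG:", flag)
```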

Summary

Under- and over-representation of certain voices continues to fuel several of the issues exposed in presentations and posters at the Congress, especially in terms of bias and active or passive resistance to changing current structures. For regular attendees of the meeting, it does feel that the more things change, and the more mitigation is attempted, the more everything stays the same. Issues raised at prior Congresses are still being discussed (the poor quality of reporting, for instance) with little evidence of improvement or evolution. To some extent that is because the familiar voices in any given field still grip the levers of power or control the narrative.

As in previous years, the Peer Review Congress was crammed full of useful data and some novel research questions. To those of us who work in editorial offices, some of the outcomes usefully validated our experiences and anecdotal evidence (though, for sure, there was a constant need to check your I-could-have-told-you-that attitude as you heard yet another elegantly designed study reveal what you see or experience every day). We implore the organizers of future Congresses to extend the conversation to include the practical implementation of some of the solutions presented, or to consider hosting “how do we tackle this?” debates around fixing problems that have been measured. If the Congress could expand its remit to become an igniter of practice change, we might finally see some evidence of peer review and publication being transformed for the better.

Presentations Referenced

  1. Women’s Representation Among Peer Reviewers of Medical Journals: Ana-Catarina Pinho-Gomes, Amy Vassallo, Mark Woodward, Sanne Peters
  2. Patterns of Gender and International Diversity of Editors and Editorial Boards Among Journals With Open Access Licenses and Open Science Policies: Micah Altman, Philip Cohen
  3. Factors Associated With Geographical Diversity of Reviewers Invited and Agreeing to Review for 21 Biomedical Journals: Khaoula Ben Messaoud, Sara Schroter, Mark Richards, Angèle Gayet-Ageron
  4. Developing and Testing a Schema for Collecting Information on Gender, Ethnicity, and Race in Scholarly Publishing: Holly Falk-Krzesinski
  5. Association Between International Editorial Staff and International Publications in Leading Biomedical Journals: Gandolina Melhem, Chris Rees, Bruno Sunguya, Mohsin Ali, Anura Kurpad, Christopher Duggan
  6. Comparison of Reporting Race and Ethnicity in Medical Journals Before and After Implementation of Reporting Guidance, 2019-2022: Annette Flanagin, Miriam Cintron, Stacy Christiansen, Tracy Frey, Timothy Gray, Iris Lo, Roger Lewis
  7. Prevalence of Honorary Authorship According to Different Authorship Recommendations and Contributor Role Taxonomy (CRediT) Statements: Nicola Di Girolamo, Reint Meursinge Reynders, Vincent Lariviere, Mostafa Ibrahim
  8. Effect of Alerting Authors of Systematic Reviews and Guidelines That Research They Cited Had Been Retracted: A Randomized Controlled Trial: Alison Avenell, Mark Bolland, Greg Gamble, Andrew Grey
  9. Assessment of a Structured and Mentored Peer Review Curriculum on Quality of Peer Review: Ariel Lyons-Warren, Whitley Aamodt, Roy Strowd, Kathleen Pieper, José Merino
  10. Reminding Peer Reviewers of the Most Important Reporting Guideline Items to Improve Completeness in Published Articles: Primary Results of 2 Randomized Controlled Trials: Benjamin Speich, Erika Mann, Christof Schönenberger, Katie Mellor, Alexandra Griessbach, Pooja Gandhi, Szimonetta Lohner, Arnav Agarwal, Paula Dhiman, Ayodele Odutayo, Iratxe Puebla, Alejandra Clark, An-Wen Chan, Michael Schlussel, Philippe Ravaud, David Moher, Matthias Briel, Isabelle Boutron, Sara Schroter, Sally Hopewell
  11. Use of an Artificial Intelligence-Based Tool for Detecting Image Duplication Prior to Manuscript Acceptance: Daniel Evanko
  12. Publication and Collaboration Anomalies in Academic Papers Originating From a Russian-Based Paper Mill: Anna Abalkina

Discussion

3 Thoughts on "Guest Post — Integrity and Trust in Peer Reviewed Literature: Will Journals Be Alone in Doing the Heavy Lifting?"

Are these listed presentations available? Also, the list goes to 7 but the in-post citations go to 12.

Hi Chris, thanks for pointing this out. Part of the citation list was inadvertently cut off and has now been added back in. Sorry for the inconvenience.

Thank you for this summary and highlight of some of the research presented at the Ninth International Congress on Peer Review and Scientific Publication. All of the research presentations, including abstracts, videos, and posters, will be available on the Peer Review Congress website (https://peerreviewcongress.org/) later this year.
