Last week we asked the Chefs, and this week we asked the global community: “What would improve trust in peer review?”


Josiline Chigwada, Librarian, Zimbabwe

Peer review can be tricky, especially when authors have the option of suggesting potential peer reviewers for their articles. Some people make these suggestions on the basis of friendship, and the chosen reviewers might fail to be strict in the process. This calls for discipline on the part of the reviewer, who should be ethical and assess the paper without regard to any relationship with its author. I believe that, to improve trust, double-blind peer review should be used and the views of at least two reviewers should be used to judge a paper.

Richard de Grijs, Astrophysicist and Journal Editor, Australia

I believe that generating trust in peer review requires transparency and a continuing education of all stakeholders.

As regards the education aspect, the stakeholders include not only the scientists, peer reviewers, and journal editors, but also politicians and interested members of the general public. It is particularly important that these latter two target audiences are educated properly as to what peer review actually entails. Peer review has been established to provide a measure of scrutiny and quality control of scientific/scholarly articles. It is not a one-size-fits-all solution and it doesn’t guarantee that a paper that passed the peer review and editorial process is flawless. The process is aimed at ensuring that an appropriate scientific approach is followed and that the results are defensible when challenged. However, peer reviewers are human and, while usually knowledgeable in their subject area, they may miss subtleties or flaws that defy superficial scrutiny. As such, peer-reviewed articles are not necessarily the ultimate truth; they have simply been subject to scrutiny by knowledgeable peers. These aspects are often overlooked in reporting peer-reviewed results to the general public or politicians, thus creating the false impression that science should be perfect and that scientists never seem willing to commit to firm results without insisting on quoting uncertainties.

In addition to garnering a proper understanding of what the peer-review process entails, transparency is of the utmost importance to provide credibility. I can think of a number of ways in which to improve transparency, and hence credibility, some of which are already being adopted by a range of top journals:

  • Double- or triple-blind peer review, thus avoiding a scenario in which peer reviewers or even editors are wowed by famous names among the author list and hence ensuring a more equal approach to peer review, focusing on content rather than prestige.
  • Publication of reviewers’ reports (and author responses) alongside the published manuscript, so that readers can check for themselves how criticism was dealt with and whether reviewers were fair and careful in their scrutiny.
  • Publication of the names of the peer reviewers, with the full consent of those reviewers. This won’t always be possible or desirable, and it could be problematic if there may be a future power imbalance (e.g., a junior reviewer severely criticizing a very senior scientist who might have a say in the future career of the junior reviewer) — so this could perhaps be encouraged but not be made mandatory.
  • Open post-publication peer review, with the option to make additional changes to the published article, provided version control is activated.

Zainab Yunusa-Kaltungo, Plastic Surgeon, Nigeria

My submissions are purely my opinion as a mid-career researcher from a low-income country.

My first peer-review assignment came long before I had been lead author on any paper (I had only been a co-author on three papers at the time); hence I had little experience with what was required of me. I was able to deduce who the lead author on the paper was from the name of the institution and listed qualifications (the author happened to be a friend and a senior colleague). Looking back now, I think most of the review I did was correcting the authors' use of technical terms. I didn't even think to review the reference list (an arduous task that often reveals lots of inconsistencies). If I were to review the same article today, I would probably mark it as contributing little to knowledge.

My suggestions to improve peer review are:

  • Select peer reviewers based on research experience and experience with the peer-review process.
  • Use a completely blinded process.
  • Include links to open-source peer-review training, such as the Publons Academy, in journal guidelines, and recommend such training to authors and peer reviewers alike.

Fast forward to today (more than 13 years later), and my biggest concern is how much of the work I do isn't related to my speciality/subspecialty. I recently took a look at the number of articles I had reviewed for a certain journal and, out of 20, only four were related to my speciality/subspecialty. I used to turn such requests down, until I received a call from someone on the editorial team of a certain journal describing the challenge of a shortage of willing peer reviewers. Now I accept them, knowing it will be a strenuous process because I have to do a lot of reading and involve colleagues I know in the speciality/subspecialty just to do what I consider a good job. I keep my morale up by telling myself, 'someone is likely doing/has done this for your own paper'.

What do I suggest in this scenario? Deliberate and continuous capacity building for and by stakeholders.

I would also be interested to know how readers here feel when they find that articles they've recommended for rejection are accepted for publication. In one case, the authors of an article I reviewed had a great research design, but the statistical methods (on which they based their conclusions) were grossly faulty; despite this, the article was accepted and published.

Donald Rugira-Kugonza, Uganda

Peer review should be as blind as possible, and this should apply to the reviewer as much as to the author. I have realized that when authors are asked to propose possible reviewers for their papers, they sometimes inform the people they've chosen of the suggestion. Some authors even write to reviewers to follow up on comments made. This is very disturbing because the reviewer may feel unprotected, and could potentially be compromised in the future.

Double-blind review is the general rule, but sometimes it is difficult to ensure. With reviewers becoming more and more unavailable, editors now have less latitude to refuse author-proposed reviewers. A foolproof system is needed to ensure the independence of the review process.

Clear and candid comments to authors, even in cases of rejection, should help build acceptance of and trust in peer-review processes.

Ismael Kimirei, Marine Ecosystems Researcher, Tanzania

I understand that there are publishing houses that invest in training reviewers, and Publons offers training as well. Another way would be through MOOCs, which could be used to train a critical mass of potential reviewers. Beyond that, I would follow the suggestions by Richard and Zainab above.

Alejandra Arreola Triana, Lecturer in Science Writing and Communication, Mexico

That’s a great question. From my colleagues’ and my experience:

  • More transparency in peer review (perhaps knowing the names of the reviewers would help?)
  • More honesty from peer reviewers, perhaps declining to review papers where they are not familiar with the methods (I’ve had friends whose papers have been rejected because the reviewer said they don’t know the technique and are not convinced it works as stated)
  • Clearer guidelines or a code of conduct for reviewers, for example to keep reviewers from recommending their own papers be cited, or a reminder that papers should not be rejected for language just because someone from another country is the author.
  • Empowering authors to reply and challenge inappropriate suggestions, perhaps making peer review a two-way street?

Buna Bhandari, Epidemiologist, Nepal

Based on my experience as an author and peer reviewer, I am sharing some suggestions:

  • Peer reviewer selection based on their expertise matching with the paper
  • Open peer-review processes
  • Clear guidelines stating that peer reviewers’ comments should focus on improving the quality of the paper, not on finding grounds for rejection
  • Some credit or indirect benefit for reviewers, such as a scheme highlighting the best peer reviewer or a certificate from the journal, which would motivate reviewers to deliver timely, high-quality reviews
  • As Alex mentioned above, the author’s voice should also be respected
  • More training or courses in being a peer reviewer would enhance the quality and trust of peer review (as Publons Academy is doing these days)

Alex Mendonça, SciELO, Brazil

Building trust in peer review is an accumulative joint effort

The progress of Open Science practices is renewing the way research is done — arguably not a “new” way of doing things, but rather the “expected right” way of doing them. Every instance and player in the research ecosystem has, more than ever before, a share of responsibility for the correctness of published research. It is a challenging but needed advance, driving a more transparent, trustworthy, and productive process. Under the open science modus operandi, the peer review of research projects must comply with the open science attributes of transparency, reproducibility, and reuse of the content used and generated.

Preprints are now one of the first steps in the publishing workflow. The responsibility for trustworthy behavior here falls upon the author. This can be enriched with the sharing of data, code, and other materials used or created by the research.

In their role as platforms for the immediate sharing of research results, preprint servers are responsible for defining and applying a set of minimum requirements and screening processes, as well as effectively communicating to all audiences the non-peer reviewed status of preprints. As artificial intelligence technologies continue to develop, these screening processes will improve progressively.

Granted that many preprints will later be submitted to journals where they will go through a peer review process ending in approval or rejection, others will likely remain in an indeterminate state on a preprint server under the sole responsibility of authors and preprint server administrators.

But it shouldn’t stop there. Readers of preprints, researchers, and other users and stakeholders such as students, librarians, journalists, and citizens can all play a role in this new way of doing research and each has, on different levels, their share of responsibility.

We’d love to hear more from all of you, too.  What would improve trust in peer review?

Siân Harris

Siân Harris is Publications and Engagement Manager at INASP, an international development organization that supports the production, sharing and use of research and knowledge in more than 25 countries in Africa, Asia and Latin America. In addition to her main job of looking after INASP’s publications and media relationships, she is also a mentor with INASP’s AuthorAID project and a member of the Think.Check.Submit. committee. Before joining INASP, Siân was editor of Research Information magazine for over a decade and a writer and editor on several science and technology publications at the Institute of Physics Publishing and other publishers. She has a PhD in inorganic chemistry from the University of Bristol, UK where she also worked part-time for the university library. She tweets as INASP at @INASPinfo and (in a predominantly personal capacity) as @sianharris8.


14 Thoughts on "Ask the Community: What Would Improve Trust in Peer Review?"

Bravo! Richard de Grijs recommends “Open post-publication peer review, with the option to make additional changes to the published article, provided version control is activated.” This provides valuable feedback on pre-publication peer review and alerts editors to the existence of expertise they may not have been aware of. Sadly, a major channel for post-publication peer review by credited, non-anonymous reviewers – PubMed Commons – was ditched by the NCBI as a failed “experiment” after a five-year trial.

Before sending the manuscript for peer review, the people involved in the pre-review process at the journal office need to play their role in:
Identifying appropriate reviewers (external or internal).
Ensuring reviewers are not on the journal’s editorial board.
Ensuring that, in a journal owned by a specialty association/society, no articles by editorial board members are published.
On receipt of the reviewed manuscript, the journal editorial board should:
Look thoroughly at the reviewers’ comments rather than simply forwarding the reviewers’ comments/decision to the authors.
In fact, editorial board members should be learned people, not just figures of authority or names on the board with no interest in doing their job properly.

I am on the editorial boards of four specialist journals in my field. These are journals in which I publish often; indeed, probably half of my publications appear there. There are virtually no other good journals in this specialty, or, when good, they may be inappropriate for other reasons. The effect of one of your proposals would be to make me absolutely refuse to undertake editorial work. Others would be in the same position, and it would become impossible to fill board positions with appropriately qualified people.

Agreed, I think the proposed rules would make many journals cease to function. Really good peer reviewers are often invited to serve on editorial boards — if doing so eliminates them from the peer reviewer pool, then you’re penalizing those who do the best job of working with the literature. If you don’t allow editorial board members to publish in the journal, then you’re going to have an editorial board that doesn’t represent your field well or that consists entirely of authors whose work is not up to the quality levels needed to publish in the journal.

Any legitimate journal has a set of ethical practices in place to avoid any conflicts of interest among its editors/editorial boards. Authors who also serve as editors recuse themselves entirely from the decision process which is turned over to other members of the board, and ideally, review is double blind as well.

Trust comes with time, and with peer review it comes with the reviewer’s seriousness and dedication to reviewing the work. Abiding by ethical practices is key, and everyone should play their vital part in following best practice.
Authors should not push for preferred reviewers; publishers and editors should adhere to best practice in selecting reviewers and work to minimize retractions and editorial expressions of concern. This can only happen if we are stringent at the initial screening of manuscripts and see the whole publication process through with the right practices. The publishing cycle is interdependent, and we should not bring competition, grudges, favors, or influence into it if we want authentic research outcomes. There are a number of real peer-review cases at the Committee on Publication Ethics (COPE) that highlight such issues and offer solutions to these problems. We should attend to such examples and refrain from malpractice in future; only then can trust in peer review be built over time.

This posting contains good ideas and bad ideas. Here are some of my views as a former Editor-in-Chief.
1. Reviewing outside one’s expertise. Are you crazy? Never let an editor bully you into reviewing something you know little about. If I wanted blissful ignorance in reviews, I’d do them myself. If qualified reviewers can’t be found, the proper thing for an editor to do is to return the manuscript without review (a nonjudgemental rejection) and suggest another journal.
2. Self-citing reviewer recommendations. Editors should examine all review comments with a careful eye for unethical practices. Simply tallying summary scores is not sufficient. I sometimes told authors they should ignore an unprofessional comment.
3. Post-publication reviews. Don’t most journals have a discussion/reply/erratum process? This is the proper and open way to tag faulty science and give credit to the people who find it.

Yes, Ken, many journals do have such a process, but it is generally cumbersome. Credited non-anonymous reviewers with the appropriate expertise are likely to have many calls on their time. The beauty of PubMed Commons was its linkage with PubMed abstracts that are often the first port-of-call for a reviewer when checking a reference in a paper. If in doubt, one could then go with one click to PubMed Commons to see if there had been any post-publication review comments. Only then, if still in doubt, did one need to go to the paper itself (which rarely would have post-publication review comments attached). PubMed Commons made reviewing a lighter task and thus made it more likely that a reviewer would accept your invitation to engage, so you could then avoid a “non-judgemental rejection.”

Wouldn’t that hypothetical person be better served by a system of open peer review in which the original peer reviews on the paper were published alongside it? That way you’d ensure a chance to look at reviews for every paper, rather than the paltry few that received post-publication peer review in PubMed Commons (which was shut down due to lack of interest/activity from the community)?

Yes, David, editors could arrange that the original peer reviews were a click away. Depending on the journal they would usually be anonymous. Indeed, the reviewers (and reviewed non-anonymous authors, not to mention the non-anonymous editors) might be embarrassed if reviewing errors were later brought to light by an extra layer of post-publication review. As for “paltry few” there were over 7000 entries and that was for starters. The publication system, as you know, was then (and still is) going through much introspective recalibration. The quality of many of the reviews was high. The “experiment” worked sufficiently to encourage its continuation.

7,000 entries for over 30 million papers is a response rate of .02333%, so perhaps “paltry” is a bit generous, and not indicative of a comprehensive resource.

In nearly every post on The Scholarly Kitchen where peer review is mentioned, and on many where it is not relevant, you offer up basically the same comment lamenting the NLM’s decision to discontinue a project they decided was not of value. Have you raised these complaints with the NLM itself? I’m not sure that repeatedly beating the same dead horse in TSK comments is going to make any difference to them nor cause a revival of the canceled project.

If this is indeed such a vital service that the research community is clamoring for, why not start your own version of it? If there is a market need, then this will be a resounding success and you will prove the NLM wrong for their view that this was not something that researchers cared much about. I’m not sure what you hope to accomplish with all of these comments.

Thank you for providing a forum for this viewpoint (a forum that in some respects follows on from Bionet.journals.note that I began in the 1990s with the support of certain publishers).

Many members of the research community seek the truth. As part of this they do research. That research has to be published in some form. Reconciling efficiency of truth-seeking and “market need” is the calibration of our times. Had an experiment, starting with “0.02333%” decades ago, taken off, we might not be suffering so severely from present ills.

But it did not take off, despite the NLM investing time and effort into it for five years. In that time there have been, and remain, other forums for post-publication peer review as well. None seem to have provided relief for perceived present ills. Does it make sense to keep repeating a failed activity in hopes that something will change, or would it be better to instead try something different?

Regardless, PubMed Commons was shut down more than 2.5 years ago, and you have been repeatedly lodging complaints in this forum since early 2018 on the subject. Again I ask, what goal do you hope to accomplish here, and given that the service remains defunct, is repeating a failed activity likely to generate success?

“Again you ask” David, because you were not satisfied with my responses. “What goal do you hope to accomplish here?” Cease and desist? Perhaps some other researcher, having viewed lamentations similar to mine that can be found by clicking around the NCBI pages, may choose to join this conversation on Monday morning. I will be working over the weekend with a Chinese collaborator in an attempt to identify, if there is one, the Achilles heel of SARS-CoV-2. I repeat this failed activity time and again. I will not be tempted to “try something different.”

I would argue that we’ve been more than patient and generous with your comments. For nearly 3 years, you have steadily posted more or less the same comment, over and over again, often hijacking the comment threads of entirely unrelated posts. Many commenters in the past have been banned for less. There is now a voluminous body of content in our comments where your complaints about the NLM remain publicly available. After nearly 3 years, it is clear that posting the same complaint over and over again in our comments is not going to generate some grassroots movement to overthrow the NLM and re-establish this now long-shuttered program.

The purpose of this blog is to be informative, not to serve as your personal soapbox for your personal issue of choice. Our comment moderation policy is clear, and states that “beating a dead horse” is cause for comments to be moderated:
I will refrain from suggesting that in the future you do something different (although in my view, taking action to correct a problem is a more effective means of solving it than just whining about the problem for years in the hope that someone else will fix it for you), but I will suggest that you continue your complaining elsewhere.

Comments are closed.