Last week we asked the Chefs, and this week we asked the global community: “What would improve trust in peer review?”
Josiline Chigwada, Librarian, Zimbabwe
Peer review can be tricky, especially when authors have the option of suggesting potential peer reviewers for their article. Some people make these suggestions on the basis of friendship, and such reviewers might fail to be rigorous in the process. This therefore calls for discipline on the part of reviewers, so that they can act ethically and assess the paper without regard to any relationship with the authors of that particular title. To improve trust, I believe double-blind peer review should be used and that the views of at least two reviewers should be used to judge a paper.
Richard de Grijs, Astrophysicist and Journal Editor, Australia
I believe that generating trust in peer review requires transparency and a continuing education of all stakeholders.
As regards the education aspect, the stakeholders include not only the scientists, peer reviewers, and journal editors, but also politicians and interested members of the general public. It is particularly important that these latter two target audiences are educated properly as to what peer review actually entails. Peer review has been established to provide a measure of scrutiny and quality control of scientific/scholarly articles. It is not a one-size-fits-all solution, and it doesn’t guarantee that a paper that passed the peer review and editorial process is flawless. The process is aimed at ensuring that an appropriate scientific approach is followed and that the results are defensible when challenged. However, peer reviewers are human and, while usually knowledgeable in their subject area, they may miss subtleties or flaws that defy superficial scrutiny. As such, peer-reviewed articles are not necessarily the ultimate truth; they have simply been subject to scrutiny by knowledgeable peers. These aspects are often overlooked in reporting peer-reviewed results to the general public or politicians, creating the false impressions that science should be perfect and that scientists are never willing to commit to firm results without quoting uncertainties.
In addition to garnering a proper understanding of what the peer-review process entails, transparency is of the utmost importance to provide credibility. I can think of a number of ways in which to improve transparency, and hence credibility, some of which are already being adopted by a range of top journals:
- Double- or triple-blind peer review, thus avoiding a scenario in which peer reviewers or even editors are wowed by famous names among the author list and hence ensuring a more equal approach to peer review, focusing on content rather than prestige.
- Publication of reviewers’ reports (and author responses) alongside the published manuscript, so that readers can check for themselves how criticism was dealt with and whether reviewers were fair and careful in their scrutiny.
- Publication of the names of the peer reviewers, with the full consent of those reviewers. This won’t always be possible or desirable, and it could be problematic if there may be a future power imbalance (e.g., a junior reviewer severely criticizing a very senior scientist who might have a say in the future career of the junior reviewer) — so this could perhaps be encouraged but not be made mandatory.
- Open post-publication peer review, with the option to make additional changes to the published article, provided version control is in place.
Zainab Yunusa-Kaltungo, Plastic Surgeon, Nigeria
My submissions are purely my opinion as a mid-career researcher from a low-income country.
My first peer-review assignment came long before I had been lead author on any paper (I had only been a co-author on three papers at that time); hence I had little experience with what was required of me. I was able to deduce who the lead author on the paper was from the name of the institution and the listed qualifications (the author happened to be a friend and a senior colleague). Looking back now, I think most of the review I did was correcting the authors’ use of technical terms. I didn’t even think to review the reference list (an arduous task that often reveals lots of inconsistencies). If I were to review the same article today, I would probably mark it as contributing little to knowledge.
My suggestions to improve peer review are:
- Select peer reviewers based on research experience and experience with the peer-review process.
- Use a completely blinded process.
- It wouldn’t be a bad idea for journals to include links to open-source peer-review training, such as the Publons Academy, and recommend such training to authors and peer reviewers alike.
Fast forward to today (more than 13 years later), and my biggest concern is how much work I do that isn’t related to my speciality/subspeciality. I recently took a look at the number of articles I had reviewed for a certain journal and, out of 20, only four were related to my speciality/subspeciality. I used to turn them down until I received a call from someone on the editorial team of a certain journal, who mentioned the challenge of a shortage of willing peer reviewers. Now I accept them, knowing it will be a strenuous process because I will have to do a lot of reading and involve colleagues I know in the speciality/subspeciality just to do what I consider a good job. I keep my morale up in doing this job by telling myself ‘someone is likely doing/has done this for your own paper’.
What do I suggest in this scenario? Deliberate and continuous capacity building for and by stakeholders.
I would also be interested to know how readers here feel when they find that articles they’ve recommended for rejection are accepted for publication. In one situation, the authors of an article I reviewed had conducted a study with a great research design, but the statistical methods (on which they based their conclusions) were grossly faulty; despite this, it was accepted and published.
Donald Rugira-Kugonza, Uganda
Peer review should be as blind as possible, for the reviewer as much as for the author. I have realized that when authors are asked to propose possible reviewers for their papers, they sometimes inform the people they’ve chosen of the suggestion. Some authors even write to reviewers to follow up on comments made. This is very disturbing because the reviewer may feel unprotected and could potentially be compromised in the future.
Double blind is the general rule, but it is sometimes difficult to ensure. Because reviewers are becoming more and more unavailable, editors now have less ability to refuse author-proposed reviewers. A foolproof system is necessary to ensure the independence of the review process.
Clear and candid comments to authors, even in cases of rejection, should help build acceptance of and trust in peer-review processes.
Ismael Kimirei, Marine Ecosystems Researcher, Tanzania
I understand that there are publishing houses that invest in training reviewers, and Publons offers it as well. One other way would be through MOOCs. MOOCs could be used to train a critical mass of potential reviewers and then I would suggest following the suggestions by Richard and Zainab above.
Alejandra Arreola Triana, Lecturer in Science Writing and Communication, Mexico
That’s a great question. From my colleagues’ and my experience:
- More transparency in peer review (perhaps knowing the names of the reviewers would help?)
- More honesty from peer reviewers, perhaps declining to review papers where they are not familiar with the methods (I’ve had friends whose papers have been rejected because the reviewer said they don’t know the technique and are not convinced it works as stated)
- Clearer guidelines or a code of conduct for reviewers, for example to keep reviewers from recommending their own papers be cited, or a reminder that papers should not be rejected for language just because someone from another country is the author.
- Empowering authors to reply and challenge inappropriate suggestions, perhaps making peer review a two-way street?
Buna Bhandari, Epidemiologist, Nepal
Based on my experience as an author and peer reviewer, I am sharing some suggestions:
- Peer reviewer selection based on their expertise matching with the paper
- Open peer-review processes
- Clear guidelines, e.g., that a peer reviewer’s comments should focus on improving the quality of the paper, not on rejecting it
- Credit or indirect benefits for reviewers, such as a scheme highlighting the best peer reviewers or a certificate from the journal, would motivate them to deliver more responsible, timely, and high-quality reviews
- As Alex mentioned above, the author’s voice should also be respected
- More training or courses in being a peer reviewer would enhance the quality and trust of peer review (as Publons Academy is doing these days)
Alex Mendonça, SciELO, Brazil
Building trust in peer review is a cumulative joint effort
The progress of open science practices is renewing the way research is done: arguably not a “new” way of doing things, but rather the “expected right” way of doing them. Every instance and player in the research ecosystem has, more than ever before, their share of responsibility for the public integrity of research. It is a challenging but needed advance, driving a more transparent, trustworthy, and productive process. Under an open science modus operandi, the peer review of research projects must comply with the open science attributes of transparency, reproducibility, and reuse of the content used and generated.
Preprints are now one of the first steps in the publishing workflow. The responsibility for trustworthy behavior here falls upon the author. This can be enriched with the sharing of data, code, and other materials used or created by the research.
In their role as platforms for the immediate sharing of research results, preprint servers are responsible for defining and applying a set of minimum requirements and screening processes, as well as effectively communicating to all audiences the non-peer reviewed status of preprints. As artificial intelligence technologies continue to develop, these screening processes will improve progressively.
While many preprints will later be submitted to journals, where they will go through a peer-review process ending in approval or rejection, others will likely remain in an indeterminate state on a preprint server, under the sole responsibility of authors and preprint server administrators.
But it shouldn’t stop there. Readers of preprints, researchers, and other users and stakeholders such as students, librarians, journalists, and citizens can all play a role in this new way of doing research and each has, on different levels, their share of responsibility.
We’d love to hear more from all of you, too. What would improve trust in peer review?