In keeping with our tradition of asking the Chefs to share their thoughts on the Peer Review Week theme each year, ahead of the official week of posts, today’s post invites them to answer the question, “Is research integrity possible without peer review?” It follows hot on the heels of yesterday’s Ask the Community post, in which you heard from a wide variety of Society for Scholarly Publishing (SSP) and scholarly communications colleagues about their thoughts on the same question. We hope that, together, everyone’s responses will give you plenty of food for thought as we look forward to the eighth annual Peer Review Week, starting on Monday (September 19).
Rick Anderson: Research integrity isn’t possible without stringent and scrupulous article review. Is it possible to conduct such review without the current system of distributed, outsourced article review by peers? Sure – I can imagine, for example, a journal setting up a crew of in-house reviewers with deep expertise in the journal’s subject area and in research design, carefully vetting each submitted article and counseling with the editor-in-chief as to its quality, relevance, and significance. But the key word here is imagine, because while this is possible in theory it’s difficult to see how it could be implemented in practice, at scale. Scholars report that it typically takes at least several hours to review a paper. So let’s posit a journal that gets 2,000 submissions per year (38.5 per week); if effective review of each paper requires, let’s say, four hours, then that’s 154 hours of review per week – or about 4 FTE. That might actually sound reasonable (especially when desk rejection is taken into account), until you consider that a journal that receives only 2,000 submissions per year is not likely to be one that has sufficient resources to employ four full-time reviewers. Bone and Joint Journal, for example, reports that it gets about that level of submissions, and its parent organization (which publishes three other journals as well) appears to have three employees in total. Nature, on the other hand, receives about 11,000 submissions each year; Chemistry of Materials receives more than 5,000, as does the Journal of Clinical Oncology. This illustrates the crucial difference between possible and feasible. Traditional peer review has many problems and weaknesses; unfortunately, they seem to me to be inextricably bound up with the very characteristics that make it feasible: its decentralized organization, its broad distribution, and its voluntary nature.
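Rick’s back-of-the-envelope workload math can be sketched out explicitly. A minimal check, using the figures from the paragraph above (the 40-hour FTE work week is an assumption, not stated in the original):

```python
# Back-of-the-envelope check of the in-house reviewer workload described above.
# Assumption: one FTE corresponds to a 40-hour work week.
submissions_per_year = 2000
hours_per_review = 4
hours_per_fte_week = 40  # assumed standard work week

submissions_per_week = submissions_per_year / 52               # ~38.5
review_hours_per_week = submissions_per_week * hours_per_review  # ~154
ftes_needed = review_hours_per_week / hours_per_fte_week         # ~3.85

print(f"{submissions_per_week:.1f} submissions/week")
print(f"{review_hours_per_week:.0f} review hours/week")
print(f"~{ftes_needed:.1f} FTE reviewers")
```

The numbers bear out the argument: roughly 154 review hours per week works out to about four full-time reviewers, before desk rejection is factored in.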
David Smith: I’m somewhat removed these days from the hurly-burly of peer review, so my thoughts should be taken in that context. But my view is that peer review isn’t there to ensure research integrity; it assumes it. If people are going to cheat, then it’s wrong to think that peer review is the trap for such behavior. To me (ex-scholar, reformed) peer review assumes that whilst the research might be completely wrong, it is submitted by peers… and thus it’s axiomatic that they be considered as good actors. Now, peer reviewers can be alert for ethics statements and the other particulars of the field of study in question, but they cannot be expected to catch research malpractice. They do not have the time, the tools, or the technical capability to do this. On top of that, I cannot imagine anyone wanting to suggest, in any way, some impropriety that might or might not be present as a result of a peer review analysis of a given paper. The legal implications loom large here, and one does not have to look very far to see evidence of the litigious behaviours of those accused of dubious practice.
I cannot imagine that it is the right solution to put forensic image and data analysis tools into the hands of peer reviewers, because it’s the wrong place to do so. If we look at the issues of image and data manipulation, the examples I have seen (without focusing on specifics) are all, frankly and bluntly, the hallmarks of clear malpractice. This should get trapped and flagged BEFORE it ever gets to a peer reviewer. It seems to me that greater traceability of the elements of published research is vital, so perhaps peer reviewers can have a role to play there in checking that. But bluntly – if a paper has problematic images, say, and the authors have not been able to supply a satisfactory answer in a very timely manner, then they shouldn’t get to play. Following on from that, there ought to be an exchange of data between publishers about problematic authors who might need greater scrutiny. And yes, that means research institutions ought to be more involved, for reputational reasons. But peer reviewers should be free to focus on the research efforts of the good actors, striving to reduce uncertainty in whatever the field may be.
Haseeb Irfanullah: My short answer is “Yes.” But, before elaborating on that, let me share a few thoughts on peer review itself.
Frankly speaking, sometimes I feel that the importance of peer review in the scholarly arena is overrated. As a journal editor, I know too well how challenging it is to find good peer reviewers who have the time and sincerity to read and comment on a manuscript I forward. As a researcher/contributor, I know how my papers get delayed as potential reviewers decline to review them one after another, or take longer than initially promised after agreeing to review. As a reviewer, I know how demanding journals can be in capitalizing on (or exploiting) my expertise and altruism for free.
We often say that, despite many systemic flaws, peer review is the best quality assurance option we have. But, 1) have we ever measured what “real disasters” are actually happening (not the imaginary ones) because of articles published in predatory and low-quality journals that don’t follow publishing standards, including the peer review process? Yet we continue to argue against preprints for not being peer reviewed, and thus being potential time bombs of misuse. 2) On one hand, we lament not getting enough good peer reviewers for manuscripts submitted to our journals. On the other hand, we believe that open review, engaging the wider community in the review process, is workable in the real world. 3) Despite following due peer review processes, Diamond Open Access journals without APCs from the Global South are often undermined by being portrayed as low quality.
If we now link the peer review process to ensuring research integrity, it indicates that it is not 100% researchers’ responsibility, or that it is not possible for them to fully ensure research integrity. I think peer reviewers can contribute to ensuring research integrity while reviewing research proposals, which may have different detailed information, components, sections, and/or annexes. But peer reviewers of research manuscripts can’t effectively do the same, because they are dealing with completed research. If peer reviewers report some indications of a lack of integrity, the manuscript will essentially be rejected. But since it may not be possible for the researchers to overcome those (let’s say, unintended) breaches of integrity identified by the reviewers, the researchers/authors will essentially look for a new journal, hoping that the new journal’s peer reviewers will not look into such ‘gaps’ so rigorously, and so they will get the manuscript published anyway. In such a case, the peer review process fails to ensure research integrity due to loopholes in the publishing system. (Nevertheless, Master’s and PhD thesis examiners may ensure research integrity even of a completed research project, as there are not many options available to the students/researchers other than to address the examiners’ observations/suggestions.)
After publication, if we focus on the use of these apparently flawed publications, there is no universally adopted system or regulation that stops us from using these research findings, whether by citing them in other publications or by including them to build a body of evidence to change a policy, for example. We may use available lists of predatory journals, which are often controversial or behind a paywall (or the whole idea of labeling a journal as predatory may itself be branded as ‘weak’). Nevertheless, flawed publications find themselves permanent residents of the knowledge ecosystem, and peer review isn’t an effective deterrent.
As researchers, we use literature, both peer-reviewed and non-peer-reviewed/grey, as we feel or believe we need it in designing our research projects or in writing up the results to communicate them. Our peer reviewers may encourage us to cite more peer-reviewed literature if the ratio of cited materials leans too far towards non-peer-reviewed ones in a manuscript. But it is rare for reviewers to ask us about the quality of the literature cited in our manuscripts.
In recent years, well-known publishers and their journals have increasingly allowed authors to cite preprints in submitted manuscripts. Many are integrating preprints into publishing workflows. The same may be true of some research funders welcoming preprints in research proposals. Are these moves not undermining our traditional reliance on the peer review process, in favor of relying on the authors’ judgment instead?
Our judgement as readers and users of research often defines what counts as a good piece of research: in the case of peer-reviewed articles, it is largely influenced by the journals or publishers in question, and in the case of non-peer-reviewed ones, by the host/funding organizations. But we are increasingly relying on authors for research integrity. We should promote scholarly and research ethics by investing more in building the capacity and understanding of those who are doing research and those who are using it. We should move away from our over-reliance on peer reviewers and stop branding them as the ‘Guardians of the Scholarly Galaxy’.
Tim Vines: A knee-jerk response: yes. In the same way it’s possible to be completely honest when filling out your tax return, even when it’s very unlikely that your return will be audited.
4 Thoughts on "Ask the Chefs: Is Research Integrity Possible without Peer Review?"
There’s a big problem in the quantitative social sciences that peer review is not solving, and that is when both the author and the reviewers don’t know enough about proper statistical methods. We need publishers to employ (or somehow recruit as volunteers) people from outside the discipline but with deep training in statistical methodology to “non-peer review” every submission that has made it through the normal peer review process, specifically to look for invalid quantitative methodology.
Interestingly, I’ve recently worked with a scientific society that brought in a large number of statistical reviewers from outside of their field to help the journal do this sort of review, and it has been problematic, as the reviewers’ lack of context and understanding of the research continually confounds authors, who are left struggling to respond to out-of-context requests. What we really need is better statistical training across the board for researchers, which (slowly) seems to be happening. This particular journal’s response has been to replace the non-contextual statistical reviewers with more and more early career researchers who are much better versed in these analyses.
We have the same issues in STEM, made worse by the fact that we cover multiple fields in which researchers present the same information in different formats or units based on what they are trying to convey. One can establish standards for a given field, but trying to do that across a dozen fields is very much like banging that square peg into the round hole over and over again.
As is often the case, it depends upon what is meant by peer review. At well-resourced journals, virtually all research reports undergo formal internal review, with comments by full-time and/or part-time paid editors. In that case, on a limited basis, research integrity is possible without what most people take the term to mean: external peer review by individuals not affiliated with the journal.
In journals that are less well-resourced, research integrity would be difficult to maintain without external peer review.
Having been an EIC at a very well-resourced journal (JAMA) and at one less well-resourced (ADC), I always find questions around peer review limited, because the term is often not defined and, as mentioned above, its meaning depends upon the resources of the journal.