Next week is Peer Review Week 2019. Asking the Chefs a peer review question has become a tradition for us. In 2016, we asked: What is the future of peer review? In 2017, we considered: Should peer review change? Last year we contemplated: How would you ensure diversity in peer review?
This year the theme is quality in peer review. So we’ve asked the Chefs: How do different stakeholders – authors, editors, readers, publishers, the public – value peer review quality?
- Authors value peer review because they need their work to pass muster with their peers in order for it to be taken seriously — not least for purposes of promotion and tenure.
- Editors value peer review because no single editor has either sufficient time or sufficient knowledge to fully judge every article that comes under her stewardship.
- Publishers value peer review because it represents the scholarly or scientific seriousness that is the coin of their commercial realm: what they are selling, in many cases, is access to content that is deemed valuable because it has been rigorously vetted by people who know their stuff.
- And of course informed readers value peer review because it provides a rough cut for them: since they don’t have the time to read and carefully evaluate every paper that might ever be written in their disciplines, they count on each other to contribute to a process that weeds out the nonsense, the special pleading, the irrelevant, and the fundamentally flawed scholarship.
Less obvious is the answer to the question “What does the public value about peer review quality?”, but I think it’s a really important question nevertheless. I would imagine that most members of the general public, if asked what “scholarly peer review” is, would have only a vague idea — and if asked what the criteria of “peer review quality” are, would be even more at a loss. And yet if you were to ask members of the general public whether they think it matters that published science and scholarship be held to high standards of rigor and honesty, they’d all say yes.
The public wants and needs solid, reliable scientific information (even if, in too many cases, we want only solid and reliable scientific information that fits our preconceived political and social agendas). Quality peer review — as distinct from shoddy or halfhearted peer review — has an important role to play in both making such information available and in weeding out scholarship that isn’t accurate or honest. The general public may not know how this filtering process is accomplished, and may not spend much time thinking about it, but I think they generally assume that the process is happening — and want it to be done rigorously and well.
- For authors, peer review is a form of validation of their research. A high-quality review process gives them the opportunity to address their peers’ concerns ahead of publication.
- For editors and publishers, there’s a similar reputational element to the peer review process. Neither wants to publish research without ensuring that it’s fit for publication. While even the highest quality peer review process can’t be guaranteed to catch all errors, it does significantly reduce the risk.
- For readers, a strong peer review process reassures them that they can trust the research they’re reading — whether as a researcher seeking to “build on the shoulders of giants” or as a member of the public wanting reliable information about cancer, climate change, child development, or any other topic.
There’s still a lot of work to do in terms of ensuring that all stakeholders have a clear understanding of the peer review process and a way to evaluate its quality (spoiler alert — more on this from Tracey Brown of Sense About Science next week!). But I hope and believe that peer review quality is something that all stakeholders value.
Phill Jones: Winston Churchill once said, “…that democracy is the worst form of Government except for all those other forms that have been tried from time to time…”. The news in recent times certainly attests to that observation.
In my experience, many academics view peer review in a similar way. It’s hard to imagine that passing a manuscript to two or three peers for feedback is going to catch all the possible problems, mistakes, or areas for improvement in a research project. Perhaps a hundred years ago, that was more reasonable, but with computational methods and large datasets increasingly common, peer review alone starts to look unequal to the task. That said, from a reader’s perspective, it’s better than nothing, and good peer review can at least catch obvious bad practice, like inappropriate statistical tests, pseudoreplication, or authors simply making claims that the data don’t support.
If good peer review can help prevent some problems, bad peer review can exacerbate them. In the worst cases, peer reviewers can insist on poor or outdated practices simply because that’s how they were taught or that’s how everybody does it, thereby hindering advancement in a field. That’s where editors come in. Good editors can set policy and police peer reviewers to ensure that they’re holding authors to account correctly and acting as a force for progress in a field.
Finally, as an author, good peer review can be incredibly helpful. When a reviewer offers a suggestion to get more information out of the data, or a follow-up experiment for your next project, that can make you feel like part of a community. On the flip side, being subjected to poor or capricious peer review can be incredibly frustrating. The subject of unfair rejection at the behest of a powerful peer reviewer is a difficult one, and complaints can often be taken as sour grapes. I can say from experience, though, that being forced to do an ANOVA on non-parametric data just because that’s the extent of reviewer number 3’s statistical knowledge is annoying to say the least. (Apologies if that seemed oddly specific.)
Todd Carpenter: If there is one pillar that sets scholarly publishing apart, it is the notion that vetting and peer review, be it editorial or double-blind review, adds significantly to the trust one can place in the content being published.
Some have recently claimed that because there are faults, biases, errors, incompetence, or even malpractice in the current process that the entire concept shouldn’t be trusted. Thacker and Tennant cherry-picked the problems they focused on to make their point, ignoring the fact that the peer review process vets millions of manuscripts per year and on the whole gets the vast majority of content review reasonably correct. Does that mean that there aren’t errors, that important material is rejected from ’top tier’ journals erroneously? It most certainly does not. Errors do happen. Beyond errors, there are journals that explicitly (or less publicly) have a political bias in their publishing aims, and others are renowned for being “predatory”. Setting this obvious malpractice aside, the process is certainly worthy of trust and is rightly recognized for adding quality.
Just to highlight one error in Thacker and Tennant’s article: they mischaracterize the results in one of the articles they cited, which found that “journal editors in our study made good appraisals regarding which articles to desk-reject” and that “peer reviewers also appeared to add value to peer review with regards to the promotion and identification of quality”. The more significant problem with Thacker and Tennant’s argument is that it sows doubt in the public’s mind that the scientific process is somehow corrupted and worth challenging, writ large, in the same way that a false or withdrawn paper about vaccines causes the public to erroneously question the effectiveness of vaccines. Perhaps the Washington Post could have done a better job by sending Thacker and Tennant’s article out for double-blind peer review, which might have caught some of the more obvious errors in the authors’ reasoning.
The question shouldn’t be whether the process is perfect. Rather, as with all scientific processes, does this process yield a better result than might otherwise be expected without it? And here, the answer is definitely yes. There are data to support this claim, and even anecdotally one can see it in the behavior of the various market participants. It is clear that authors value this quality, or at least the quality this process confers on the content that results from it. This can be seen in authors’ desire to submit materials to publications with the most rigorous vetting processes (as evidenced by their high rejection rates). You can also see it in the interest of authors, even well-established scientists, in publishing in the “most prestigious” journals in their fields.
Consider, for example, an author who has seniority and respect in their field, who has reached as senior a role in their institution as they want or can achieve, and who has a social following for their work. They would probably receive no benefit from publishing their work in a “top-tier” journal (however that is defined), and yet they do so anyway. They could probably publish the same results in a preprint repository or on their own website, and it would effectively “distribute” the results to the same communities, and yet they very rarely do so.
Similarly, readers are not regularly trawling the entirety of the web for scholarly works. Researchers have limited time and need to be judicious in the materials they seek out and the resources they invest time engaging with. There is far too much information on the internet, even in a modestly sized niche field. Sure, there are outliers; here I particularly note Malcolm Gladwell and his love of SSRN as an example, though he’s not technically a scholar in the traditional sense. Preprint communities and independent information sharing have their place in the scholarly landscape and will likely continue to grow. However, for the vast majority of traditional content, for authors, for readers, and for the public, the vetting process that peer review confers on content and the resulting publication process is valuable and worth cherishing. Trust is difficult to gain, it is hard to maintain, and once lost it is extremely hard to regain.
Judy Luther: I’ll start with two stakeholders that are not on the list – government regulatory agencies and corporations. Whether it is the Food & Drug Administration in food protection or the Securities & Exchange Commission in financial accounting, government agencies that develop regulations either consider or reference research articles and rely on peer review as a way of verifying the methodology and outcomes. While citing and linking mechanisms in regulatory agencies are not well established and therefore hard to reference, the communities are clear about the value of peer review in the research process.
In terms of the quality of peer review, I doubt that the ‘public’ is aware of the rigor or time involved when a review is done well. They are more likely influenced by a reference to published research as authoritative. Publishers may consider peer review a competitive advantage with their core audience, but with an eye on the bottom line they also view it as a cost factor and as necessary for their journals to secure high rankings.
The remaining stakeholders are those within the academy who are actively engaged in creating, reviewing, and reading the research in their fields. Authors value a constructive critique by peers that strengthens their article. Editors value high-quality articles and the positive outcomes of peer review. They also influence the quality of reviews by establishing expectations for how a review is done and for the timeliness of the process. The remaining readers in the field may be unaware of the extent to which articles have been revised unless they themselves are submitting articles for review as authors.
Now it’s YOUR turn!
How do you believe different stakeholders – authors, editors, readers, publishers, the public – value peer review quality?