Next week is Peer Review Week 2019. Asking the Chefs a peer review question has become a tradition for us. In 2016, we asked: What is the future of peer review? In 2017, we considered: Should peer review change? Last year we contemplated: How would you ensure diversity in peer review?

This year the theme is quality in peer review. So we’ve asked the Chefs: How do different stakeholders – authors, editors, readers, publishers, the public – value peer review quality?


Rick Anderson: Some of the answers to this question are pretty obvious:
  • Authors value peer review because they need their work to pass muster with their peers in order for it to be taken seriously — not least for purposes of promotion and tenure.
  • Editors value peer review because no single editor has either sufficient time or sufficient knowledge to fully judge every article that comes under her stewardship.
  • Publishers value peer review because it represents the scholarly or scientific seriousness that is the coin of their commercial realm: what they are selling, in many cases, is access to content that is deemed valuable because it has been rigorously vetted by people who know their stuff.
  • And of course informed readers value peer review because it provides a rough cut for them: since they don’t have the time to read and carefully evaluate every paper that might ever be written in their disciplines, they count on each other to contribute to a process that weeds out the nonsense, the special pleading, the irrelevant, and the fundamentally flawed scholarship.

Less obvious is the answer to the question “What does the public value about peer review quality?”, but I think it’s a really important question nevertheless. I would imagine that most members of the general public, if asked what “scholarly peer review” is, would have only a vague idea — and if asked what the criteria of “peer review quality” are, would be even more at a loss. And yet if you were to ask members of the general public whether they think it matters that published science and scholarship be held to high standards of rigor and honesty, they’d all say yes.

The public wants and needs solid, reliable scientific information (even if, in too many cases, we want only solid and reliable scientific information that fits our preconceived political and social agendas). Quality peer review — as distinct from shoddy or halfhearted peer review — has an important role to play in both making such information available and in weeding out scholarship that isn’t accurate or honest. The general public may not know how this filtering process is accomplished, and may not spend much time thinking about it, but I think they generally assume that the process is happening — and want it to be done rigorously and well.

Tim Vines: I’m going to tackle this in something resembling reverse order. In an ideal world, the public would be unable to perceive quality in peer review because all published articles would have been thoroughly reviewed (and revised) and there would be little difference in quality among articles, no matter which journal they appear in. In reality, quality peer review affects the public most in its absence, particularly when flawed research is communicated with an overblown press release and uncritical media coverage. The ‘in mice’ Twitter feed is a refreshing and often hilarious reminder of the need for press release restraint.
The publication of a low-quality, high-impact article is also the moment when publishers most wish their journals had quality peer review, as they then get flak from all sides; this is especially the case for publishers that are popular punching bags (Elsevier), or those that are perceived to influence their peer review process in favor of acceptance (e.g., Frontiers). Otherwise, publishers most value quality peer review through its positive effects on the reputation of their journals: more exacting peer review should ultimately lead to better articles, and hence higher Impact Factors and increased submissions.
Quality peer review removes substantial methodological and logical issues from the article, allowing readers to assume that the article is largely correct and instead focus on how it informs their own research and the direction of the field as a whole. Finding major flaws is exasperating because readers must then decide whether to abandon the article as a lost cause, or comb through it to establish which aspects are robust and which are garbage. Researchers reading the literature thus principally value quality peer review as a time-saving measure: someone else has spent the time fixing the articles that can be fixed and nixing the ones that can’t so they don’t have to.
Editors are the ones responsible for ensuring a consistently rigorous peer review process; some place a very high value on quality and put in the diligence needed to make it happen, while others see an Editor position as a sinecure and just go through the motions. The latter are principally responsible for the lapses in peer review that affect the groups discussed above. (Reviewers are akin to Editors, but are directly responsible for their own evaluation of the article rather than its peer review process as a whole).
Authors value quality peer review like they value the dentist: painful and time consuming, but vital for minimizing the risk of nastier problems in the future. Instead of cavities, weak peer review allows the publication of bad papers. Some day, that bad paper might need to be painfully extracted from the literature while all your colleagues look on.
Alice Meadows: Ultimately, I think that all stakeholders value peer review quality for the same reason — because it makes the research they’re doing/reading/evaluating/making use of better. It’s a kind of safety check, which provides everyone with a level of trust that is missing from so many other types of information that we consume.
However, each stakeholder group has a slightly different perspective on the value of peer review quality.
  • For authors, peer review is a form of validation for their research. A high quality review process gives them the opportunity to address their peers’ concerns ahead of publication.
  • For editors and publishers, there’s a similar reputational element to the peer review process. Neither wants to publish research without ensuring that it’s fit for publication. While even the highest quality peer review process can’t be guaranteed to catch all errors, it does significantly reduce the risk.
  • For readers, a strong peer review process reassures them that they can trust the research they’re reading — whether as a researcher seeking to “build on the shoulders of giants” or as a member of the public wanting reliable information about cancer, climate change, child development, or any other topic.

There’s still a lot of work to do in terms of ensuring that all stakeholders have a clear understanding of the peer review process and a way to evaluate its quality (spoiler alert — more on this from Tracey Brown of Sense About Science next week!). But I hope and believe that peer review quality is something that all stakeholders value.

There’s still a lot of work to do in terms of ensuring that all stakeholders have a clear understanding of the peer review process and a way to evaluate its quality

Haseeb Irfanullah: Different stakeholders of scholarly publishing in Bangladesh see the value of a good peer review process differently. I’ve collected some thoughts on peer review from a variety of these stakeholders below:
To an author, a good peer review may mean reinforcing the knowledge she has gathered through her research. The same person, as a reviewer, may see a co-benefit from the peer review process as a member of the larger scholarly community. Also, engaging in peer review leads her to a ‘forced’ critical reading of a manuscript, which is sometimes − very honestly speaking − missing when reading a published paper! – Samiya A. Selim, an academic
A good peer reviewer can be an influencer! She could help the authors better develop originality in their analytical approaches and in articulating novel arguments. Such a role could be very useful for authors in the Global South. − Lailufar Yasmin, an academic
The importance of good peer review is threefold. It ensures the standards of a journal and the papers published in it. It enhances the quality of the authors too. And, for journals published by societies of developing nations, it helps to highlight the country and its research by ensuring quality. − Rakha Hari Sarker, an editor
In the absence of a reliable peer-review system, an editor struggles immensely to publish her journal at a level that maintains acceptable standards. Moreover, given the very small national expert pool in a specific discipline, journals from the South may find their peer-review process compromised. – Md. Anwarul Islam, an editor
Journalists in general value the peer-review process as it authenticates information and analysis. But sometimes they follow the author-reader-text theories. In a journalistic view, the author matters. The same text from different authors can be of different value to the readers. So, the peer-review process in scholarly journals may not always be significant in journalistic processes. − Sheikh Rokon, a journalist
In the civil society, there is often a lack of exposure to scholarly publishing, which leads to a large amount of grey literature by the NGOs. There are, however, people who see the value in publishing in peer-reviewed journals for evidence-based advocacy. However, finding a journal with right experts to accurately review the content, or one that reaches the appropriate audience is challenging. These factors become most important when a published article is used to challenge the policy-makers, as they tend to question the authority of the journal and the author. − Enamul Mazid Khan Siddique, a development practitioner

Phill Jones: Winston Churchill once said, “…that democracy is the worst form of Government except for all those other forms that have been tried from time to time…”. The news in recent times certainly attests to that observation.

In my experience, many academics view peer-review in a similar way. It’s hard to imagine that passing a manuscript to two or three peers for feedback is going to catch all the possible problems, mistakes, or areas for improvement in a research project. Perhaps a hundred years ago, that was more reasonable, but with computational methods and large datasets increasingly common, peer review alone starts to look unequal to the task. That said, from a reader’s perspective, it’s better than nothing and good peer review can at least catch obvious bad practice, like inappropriate statistical tests, pseudoreplication, or authors simply making claims that the data don’t support.

It’s hard to imagine that passing a manuscript to two or three peers for feedback is going to catch all the possible problems, mistakes, or areas for improvement in a research project.

If good peer review can help prevent some problems, bad peer review can exacerbate them. In the worst cases, peer reviewers can insist on poor or outdated practices simply because that’s how they were taught or that’s how everybody does it, thereby hindering advancement in a field. That’s where editors come in. Good editors can set policy and police peer-reviewers to ensure that they’re holding authors to account correctly and acting as a force for progress in a field.

Finally, as an author, good peer review can be incredibly helpful. When a reviewer offers a suggestion to get more information out of the data, or a follow-up experiment for your next project, that can make you feel like part of a community. On the flip side, being subjected to poor or capricious peer review can be incredibly frustrating. Unfair rejection at the behest of a powerful peer reviewer is a difficult subject to raise, as complaints can easily be taken as sour grapes. I can say from experience, though, that being forced to do an ANOVA on non-parametric data just because that’s the extent of reviewer number 3’s statistical knowledge is annoying to say the least. (Apologies if that seemed oddly specific.)

Todd Carpenter: If there is one pillar that sets scholarly publishing apart it is the notion that vetting and peer review, be that editorial or double-blind review, adds significantly to the trust one can place in the content being published.

Some have recently claimed that, because there are faults, biases, errors, incompetence, or even malpractice in the current process, the entire concept shouldn’t be trusted.

Some have recently claimed that, because there are faults, biases, errors, incompetence, or even malpractice in the current process, the entire concept shouldn’t be trusted. Thacker and Tennant cherry-picked the problems they focused on to make their point, ignoring the fact that the peer review process vets millions of manuscripts per year and on the whole gets the vast majority of content review reasonably correct. Does that mean that errors never happen, or that important material is never erroneously rejected from ‘top tier’ journals? It most certainly does not. Errors do happen. Beyond errors, there are journals that explicitly (or less publicly) have a political bias in their publishing aims, and others are renowned for being “predatory”. Setting this obvious malpractice aside, the process is certainly worthy of trust and is rightly recognized for adding quality.

To highlight just one error in Thacker and Tennant’s article: they mischaracterize the results of one of the articles they cite, which found that “journal editors in our study made good appraisals regarding which articles to desk-reject” and that “Peer reviewers also appeared to add value to peer review with regards to the promotion and identification of quality”. The more significant problem with Thacker and Tennant’s argument is that it sows doubt in the public’s mind that the scientific process is somehow corrupted and worth challenging, writ large, in the same way that a false or withdrawn paper about vaccines causes the public to erroneously question the effectiveness of vaccines. Perhaps the Washington Post could have done a better job by sending Thacker and Tennant’s article out for a double-blind peer review, which might have caught some of the more obvious errors in the authors’ reasoning.

The question shouldn’t be whether the process is perfect. Rather, as with all scientific processes, does this process yield a better result than might otherwise be expected without it?

The question shouldn’t be whether the process is perfect. Rather, as with all scientific processes, does this process yield a better result than might otherwise be expected without it? And here, the answer is definitely yes. There are data to support this claim. Even anecdotally, one can see this in the behavior of the various market participants. It is clear that authors value this quality, or at least the quality this process confers on the content that results from it. This can be seen in authors’ desire to submit materials to publications with the most rigorous vetting processes (as evidenced by their high rejection rates). You can also see it in the interest of authors, even well-established scientists, in publishing in the “most prestigious” journals in their fields.

Consider, for example, an author who has seniority and respect in their field, has reached as senior a role in their institution as they want or can achieve, and has a social following for their work. They would probably receive no benefit from publishing their work in a “top-tier” journal (however that is defined), and yet they do so anyway. They could probably publish the same results in a preprint repository or on their own website and effectively “distribute” them to the same communities, and yet they very rarely do so.

Trust is difficult to gain, it is hard to maintain, and once lost is extremely hard to regain.

Similarly, readers are not regularly trawling the entirety of the web for scholarly works. Researchers have limited time and need to be judicious about the materials they seek out and the resources they invest time engaging with. There is far too much information on the internet, even in a modestly-sized niche field. Sure, there are outliers, and here I particularly note Malcolm Gladwell and his love of SSRN, though he’s not technically a scholar in the traditional sense. Preprint communities and independent information sharing have their place in the scholarly landscape and will likely continue to grow. However, for the vast majority of traditional content, for authors, for readers, and for the public, the vetting process that peer review confers on content and the resulting publication process is valuable and worth cherishing. Trust is difficult to gain, it is hard to maintain, and once lost is extremely hard to regain.

Judy Luther: I’ll start with two stakeholders that are not on the list – government regulatory agencies and corporations. Whether it is the Food & Drug Administration in food protection or the Securities & Exchange Commission in financial accounting, government agencies that develop regulations either consider or reference research articles and rely on peer review as a way of verifying the methodology and outcomes. While the mechanisms by which regulatory agencies cite and link to research are not well established, and specific uses are therefore hard to trace, these communities are clear about the value of peer review in the research process.

In terms of the quality of peer review, I doubt that the ‘public’ is aware of the rigor or time involved when a review is done well. They are more likely influenced by a reference to published research as authoritative. Publishers may consider quality review a competitive advantage with their core audience but, with an eye on the bottom line, also view it as a cost factor and as necessary for their journals to secure high rankings.

The remaining stakeholders are those within the academy who are actively engaged in creating, reviewing, and reading the research in their fields. Authors value a constructive critique by peers that strengthens their article. Editors value high quality articles and the positive outcomes of peer review. They also influence the quality of reviews by establishing expectations for how a review is done and for the timeliness of the process. The remaining readers in the field may be unaware of the extent to which articles have been revised unless they themselves are submitting articles for review as authors.


Now it’s YOUR turn!

How do you believe different stakeholders – authors, editors, readers, publishers, the public – value peer review quality?

Ann Michael


Ann Michael is Chief Transformation Officer at AIP Publishing, leading the Data & Analytics, Product Innovation, Strategic Alignment Office, and Product Development and Operations teams. She also serves as Board Chair of Delta Think, a consultancy focused on strategy and innovation in scholarly communications. Throughout her career she has gained broad exposure to society and commercial scholarly publishers, librarians and library consortia, funders, and researchers. As an ardent believer in data informed decision-making, Ann was instrumental in the 2017 launch of the Delta Think Open Access Data & Analytics Tool, which tracks and assesses the impact of open access uptake and policies on the scholarly communications ecosystem. Additionally, Ann has served as Chief Digital Officer at PLOS, charged with driving execution and operations as well as their overall digital and supporting data strategy.

Discussion

11 Thoughts on "Ask The Chefs: Peer Review Quality"

I’m all on board with traditional peer review unless there is evidence that other mechanisms of evaluation are robust. Don’t fix it if it isn’t broken.
However, peer review is very time-consuming when done well. Depending on the discipline, it may be better for professors to spend their time writing critical review articles that cover sub-disciplinary swathes of the literature, including (and increasingly, especially) in relation to preprints. For those disciplines in which preprints are taking off, what is better: a professor peer-reviewing five journal articles about discrete topics in one year, or writing one critical review that provides an open access narrative of an unfolding sub-discipline, as disclosed in preprints (which are OA!)? (Related to this: why not encourage and reward the writing of preprints that critically evaluate and review other preprints? The latter are ideally suited for rapid but detailed critiques of research results at the frontier of disclosure of new ideas – a de facto peer review.)
Arguably, in HEP and perhaps some other heavily quantitative areas, publication of peer reviewed journal articles is an afterthought to arXiv publication. That, however, reflects a particular academic culture. (Interestingly, PsyArXiv is starting to burgeon, following in the wake of bioRxiv, but whether publication in those repositories will “suffice” in the way it does in physics communication is anyone’s guess.)

P.S. There is a larger context for this emphasis on preprints that I won’t repeat here, on pain of being repetitive.

To expand on Rick’s point about public use of peer review, it is worth emphasizing, whether we like it or not, that its use is now codified in our legal system (Daubert v. Merrell Dow Pharmaceuticals), regulatory system (the Data Quality Act and more since), and advisory system (e.g., the IPCC and many more). All of these specify or prescribe the use of, and give importance to, the “peer-reviewed literature.” This means that we are caretakers of a system that has very important public and societal uses, even if there are still challenges in all these uses, decisions, and acts—there are. These are in addition to its role in science communication. A good example in the legal system is the original Proposition 8 decision (worth a read), but there are many others. This means we have been given a significant societal responsibility and that we should weigh these needs and responsibilities in its evolution. Todd, thanks for raising the many issues with the recent op-ed, a perfect example of where peer review would have helped.

One general point: I love this kind of roundup of different ideas.

I would suggest the public needs more understanding of both peer-review and the scholarly research process writ large. We live in a post-factual world where profound issues–like climate change–are dismissed as a “hoax” by large swaths of the public. This is beyond troubling. I have heard it said that the U.S. Republican party is the only major political entity in the world that rejects climate science. I would submit that this happens in part because they can get away with rejecting and even demonizing science. In the fever swamps of social media, scientists are the ones raking in the money and inventing things. It’s mind boggling but the demonization of scholarship and science is working.

We need more public intellectuals and, in this context, we need to help the public understand how research is done and how it can be properly evaluated by a general reader.

The reasons for this may be more complex and nuanced than the obvious effects. It’s clear enough that there are political motives behind some US politicians’ denial of the human impact on climate change. However, a few renegade scientists are adding to the misinformation. (In a recent interview on CNN, Bill Nye told Chris Cuomo he never realized there were so many second-rate players among scientific academics. He was too civil to mention a current presidential advisor by name.)

More generally, the lack of trust in physics and other “hard” science may be an overflow from social science, which can be far more subjective even than string theory. Many of the bizarre ideas hatched in the social science camp are peer-reviewed. That doesn’t make them accurate or realistic.

This may lead to the public painting all academics with the same brush, thus presuming that physics is just as subjective as the most recently reported social theory.

Somewhere in the discussion should be consideration of the term “peer review,” which as traditionally interpreted means review by equals. But scholarly “peers” are usually not equals. The task of an author who is marginally ahead of his/her field is simply to write clearly and “peers” will likely understand. The task of an author who is streaks ahead of his/her field is different. An entirely different skill-set is needed. From high up on the mountain there must be a tuning in to the mindset of those on the slopes below. As historians have so often related, high above the cloud-line the privileged views of the “Mendels” in our midst can often be lost for decades if not for centuries.

An excellent point, especially in humanistic fields. Very rarely are “peers” anything like equals, or true peers, in terms of interest and specialization.

One of the most satisfying moments in my tenure as an editor was when an author actually thanked me for rejecting his paper! A diligent reviewer had caught a major error that would have caused the author great professional embarrassment. The significantly revised manuscript later passed peer review with flying colors.

Another aspect to consider is how informative the parameters and reports provided by your peer review system are when it comes to giving meaningful feedback about its integrity.

Authors value review as an important exercise because it helps improve the quality of the paper.

Comments are closed.