Editor’s Note: Today’s post is by Avi Staiman. Avi is the founder and CEO of Academic Language Experts, a company dedicated to helping academic scholars prepare their research for publication and bring it to the world.

Peer review can be an opportunity for authors to rethink and refine their arguments. For example, vaccines developed by leading pharmaceutical companies to combat the coronavirus were scrutinized by scientists around the world through the peer review process, both to ensure the veracity of the research and to suggest tweaks and improvements. Trust in the scientific review process in general, and in peer review specifically, was put to the test, and much of the public debate hinged on the reliability and transparency of the expedited review process.

Peer review of academic articles has numerous faults, and it has come under attack on multiple fronts in recent years. Some critics point to peer review missing fundamental flaws in published papers, resulting in problematic research that requires retraction, while others take issue with the fact that peer review can lead to biased results. Peer review also tends to be slow and expensive, with reviewers generally remaining unpaid. Furthermore, in many cases, reviewers are far less experienced researchers than the scholars they are reviewing.

These legitimate critiques generally stop short of recommending doing away with peer review altogether in favor of a preferred alternative. As such, peer review continues to be the central process around which scientific knowledge is explored, critiqued, scrutinized, and refined.

[Image: the words “Make Things Better” on wooden blocks arranged in the shape of a staircase.]

In my experience as the CEO of an author services company, I encounter dozens of peer reviews on a weekly basis, and I am troubled by what I perceive as a steady decline in the culture and attitude shown in these reviews, specifically for journal submissions. Many reviewers seem to see their primary role as deflating the arguments and methodologies of the manuscripts they receive, often without any concern for how the author will receive the comments or whether the critique can be addressed through revision.

Rather than being an educational process that can help scholars improve their research and catch errors, peer review has become an arena where it is acceptable to tear down others. For example, Adriana Romero-Olivares, professor of microbial ecology at New Mexico State University, details the demoralizing experience of receiving reviews that criticized the language and writing style of her article in great detail without even addressing the underlying scientific findings. A Facebook group named ‘Reviewer 2 Must Be Stopped!’ has tens of thousands of members who share their poor, and at times outrageous, review experiences.

Authors, meanwhile, are retreating into ‘fight or flight’ mode, either responding in kind to reviewers or shelving the research altogether. Authors feel ‘battered’ rather than ‘bettered’ by their reviews and respond either by becoming intransigent or by beginning to doubt the quality of their own research, and even themselves as researchers. These authors feel the sting of reviewers who dismiss or reject their manuscript, seemingly without any responsibility to explain how they would have done things differently or what the author can do to improve the research in light of the critique.

Collateral damage of broken peer review

The anonymous review system makes it difficult to conduct constructive dialogue, and authors are left feeling helpless and frustrated, not knowing what reviewers want from them or how they can improve their research. This is especially prevalent among more junior scholars, who have less experience handling rejection and figuring out how to absorb and dissect reviewers’ comments (on both a cognitive and an emotional level). Sometimes simply deciphering reviewer comments can prove to be a herculean task. This is especially true for articles written by EAL (English as an Additional Language) authors, whose methods and writing conventions likely differ from those of the reviewer.

Reviews that are especially mean or critical, without providing tangible points for improvement and growth, can lead scholars to bury their research on their laptops, not daring to send it to another journal for a second attempt at publication. In an ironic twist, some of the world’s largest publishers, desperate to grow their journal collections as quickly as possible, end up turning scholars off, dissuading them from submitting and publishing their work because of the frustrations of the review process.

It is, of course, important to qualify that reviews differ from scholar to scholar, and some reviewers do try to take an educational approach to help colleagues understand what they can do to improve their work. It is also important to note that some reviewers see their work as ‘volunteering’, and the amount of time they invest in thoughtful feedback may reflect the degree of priority they give to such reviews. Many journal editors take great care to sift through reviews and remove any demeaning or otherwise irrelevant critiques. However, the global reach of digital journals and the move toward open access have incentivized publishers to launch more and more journals and to publish more and more articles as quickly as possible, putting editors under ever-increasing pressure and limiting their ability to conduct proper oversight.

The critique-heavy approach extends beyond peer review and permeates many other areas of academic life as well. Research students quickly learn that scholars are expected to trade barbs with one another throughout their careers. The resulting feelings of loneliness and isolation should come as no surprise. We must ask ourselves what culture and attitude critical review instills in us as scholars, and whether it encourages us to be at our best or inhibits us from growing as professionals and improving our work. Are we truly a ‘scholarly community’, or have we learned to pull ourselves up by dragging others down?

Constructive critique benefits science

To be clear: I do not propose making the peer review process any less rigorous, nor do I think that reviewers need to couch their critique in unnecessarily complimentary language to avoid hurting anyone’s feelings. Being able to receive feedback and criticism, and knowing how to digest rejection, is an essential part of the academic maturation process. In fact, research that is rejected can end up being more successful than research that is accepted straight away.

I believe that peer review needs to extend beyond a formulaic system for sifting through and picking out the ‘best’ or ‘most innovative’ research from the rest of the pack. Rather, it should be seen as an educational process whereby reviewers help authors to sharpen their methods, refine their arguments, and consider alternative perspectives.

This view of peer review is not without precedent. The earliest known form of what we commonly refer to as peer review dates back to the 9th century. Ishāq ibn ʻAlī al-Ruhāwī, author of Ethics of the Physician, prescribed that, following patient visits, doctors should write notes that would then be reviewed by a council of medical professionals to determine the quality of care provided. The feedback would then be shared with the doctor to help them reflect and improve.

So long as researchers retain their role as educators, it is our obligation and responsibility not merely to ‘batter’ our peers but to give them direction for ‘bettering’. In other words, peer review has the potential to be transformed from a toxic, destructive experience into a constructive one. But how can we go about doing that?

An educational approach to peer review

A format that might appeal not only to authors and reviewers but to publishers as well takes a page from how HSS (humanities and social sciences) book publishers review manuscripts.

One of the main differences between the STEM journal and HSS book submission processes is that book acquisitions editors get involved before the manuscript is complete (and sometimes before there is a manuscript at all). Because the process starts at a relatively early stage of the writing, editors are incentivized to help authors and sign them up before other publishers can swoop in and publish the book themselves. Consider the potential parallel with the increasing use of journal preprints as a place where journal editors could step in and begin working with authors at an early stage. (It may also help that book proposals can be submitted to multiple publishers simultaneously.)

These editors are generally people who love new ideas and books and who love helping authors maximize their manuscript’s latent potential. The back-and-forth between author and acquisitions editor tends to be positive and constructive, resulting in a manuscript that is greatly improved by this ‘meeting of the minds’. In contrast, STEM journal editors tend to be relegated to more of a gatekeeping role, and their involvement in improving the research is limited.

If editors’ time were freed up to work with a cohort of scholars whose underlying research is strong but needs direction and focus, the resulting research could be vastly improved. Publishers looking to increase their research output could develop a reputation among authors as being helpful and constructive (words not generally used when discussing academic publishers), incentivizing authors to submit their research where it would be given attention and care. This give-and-take could also help assuage scholars’ fears of making embarrassing mistakes and minimize the risk of future retraction. While each article may not receive the suggested 130 hours of professional expertise reported by some HSS journals, creating an editorial culture where the editor’s role is less about finding reasons to reject a manuscript and more about finding ways to improve it would pay dividends to publishers in an era where author experience is an increasingly important differentiator between journals. Since business models are increasingly based on the volume of good papers a journal publishes, activities that can potentially increase submissions and the number of resulting high-quality papers published could potentially cover the associated costs of those extra efforts.

Conclusion

By making education and the improvement of the scholarly community the center of our peer review efforts, we would be doing a service not only to scholars on both sides of the review process but also to the general public, which funds and benefits from the resulting findings. If we can transform dialogue between author and reviewer into a constructive and positive one, it would help authors improve their research and give them a better shot at surviving the reviewers’ critique. If we reframe the peer review process for reviewers as helping authors improve, there might be less of a feeling of exploitation and more willingness to give serious time, thought, and consideration to their suggestions. And finally (maybe most importantly), the resulting science would be improved, giving people access to knowledge based more on facts and the scientific method and less on ego, pride, reputation, and self-worth.

There would still be plenty of rejections from reviewers, but the role of the reviewer would primarily be to suggest constructive improvements and recommend publication, not to identify flaws or launch ad hominem attacks. I think any journal that took such an initiative and offered such a service would quickly become a magnet for authors. Let’s reject the ‘battering’ peer review process and see if we can’t transform it into ‘bettering’ review instead.

Avi Staiman

Avi Staiman is the founder and CEO of Academic Language Experts, a company dedicated to empowering English as an Additional Language authors to elevate their research for publication and bring it to the world. Avi is a core member of CANGARU, where he represents EASE in creating legislation and policy for the responsible use of AI in research. He is also the co-host of the New Books Network 'Scholarly Communication' podcast.

Discussion

31 Thoughts on "Guest Post — Has Peer Review Created a Toxic Culture in Academia? Moving from ‘Battering’ to ‘Bettering’ in the Review of Academic Research"

Since I just asked on Twitter last night what comes next if/when the peer review system fails, I read this with great interest. Thank you.

I’d like to understand more about this claim, as it seems a bit narrower than what I see elsewhere: “Since business models are increasingly based on the volume of good papers a journal publishes, activities that can potentially increase submissions and the number of resulting high-quality papers published could potentially cover the associated costs of those extra efforts.” Elsewhere I see statements that business models are now based on, or moving in the direction of, the volume of papers, sans the qualifier “good” …

The APC/author-pays model indeed does incentivize article volume as the route to financial success, as one gets paid for every article published. But there still remains value in journal reputation — to get those articles in (and published and paid for), the journal has to be an attractive place for authors. All things being equal, why send your article to journal X instead of journal Y? Authors want their work to be published in a reputable journal, they want the cachet of the journal’s reputation to enhance their own, and they want their papers to be read (which is more likely in a journal the community respects than in one that is largely disregarded as a trash heap). Hence, a high volume of high quality papers would be the ideal position for any journal under current OA business models (see Nature Communications as an example).

What is ideal is not always what is increasingly the case. I’m interested in the empirical claim. 🙂

I guess I’m not clear on what you’re asking — is it that the business model favors only volume, rather than volume and quality? One would think that it favors both, given that you can charge a higher APC based on reputation (and APCs do seem to correlate with JIF scores for whatever that’s worth). All else being equal, wouldn’t a journal publishing 5000 articles at a $5000 APC be more profitable than one with the same volume charging a $1500 APC?

I’m asking what the evidence is that business models are increasingly based on volume of good articles. Rather than just volume of articles.

I tend to see this post as perhaps more aspirational than empirical, and as noted in another comment, have my doubts about the financial balances the suggestions would require.

As far as empirical evidence for that claim, I know of no publisher that makes its strategic plans publicly available, so it’s hard to point you to a link, though in my work as a consultant, I have yet to work with any society or publisher that is okay with abandoning rigor and quality solely in favor of quantity (small sample sizes noted). Though there are some publishers that embody that practice in reality, I suspect that not a single one would actively choose to publish the same quantity of poor quality articles rather than high quality articles. It is perhaps also worth noting the number of OA journals that tout their Impact Factor or other quality measures, that actively market the rigor of their review process to authors, and that promote and market published articles to drive citations and recognition of the quality of those articles. If quantity was all that mattered, none of that would happen.

I don’t think the trade-off in question for a publisher would be between the same number of poor and good articles (of course they’d choose good) … the trade-off in question, it seems to me, is poor articles versus none/fewer articles at all.

Sure, but doesn’t that mean that the model does indeed favor quantity of quality articles, and that some are forced to settle for a less than ideal business outcome?

I do see a difference between a claim about an abstracted model and models as implemented. Hence my interest in the empirical statement about the direction of travel in implementation.

I would take the statement as more the former than the latter. And I might argue that implementation increasingly strives toward that ideal (as more and more societies are launching OA journals for example), but execution varies significantly.

Hi Lisa,

Thanks for your thoughtful feedback and comments, and for helping me sharpen my argument.

I was really trying to make an aspirational argument and wasn’t making an empirical claim (as this would require considerable insider information that none of us have). I would argue, however, that 1) investment in more professional developmental review and feedback could be done at reasonable cost, and publishers would see clear returns in terms of reputational payoff, and 2) authors are very skeptical of publishers and frustrated by cumbersome submission processes. If a publisher invested in its user experience and took a constructive approach to authors with potential, it could dramatically increase the number of submissions it receives and the articles it could publish (regardless of quality). I hope this helps clarify, thanks!

Thank you for these additional thoughts. As someone who received a “reviewer #2” response just this week, I certainly resonate with your bettering not battering framework. Fortunately, I’m experienced and senior enough to know how to sort that through, deal with it in the reply to editor, etc. But, that I can do so doesn’t mean I should have to! 🙂

And let’s not forget about the more junior scholars who don’t know how to handle such comments and may take them as a sign that their research isn’t worthwhile, or even that they aren’t in the right profession. I also think that much is lost in translation with EAL (English as an Additional Language) scholars. In my experience, it is sometimes harder for them to ‘read between the lines’ and grasp cultural nuances in the review. This reaches the point of absurdity when scholars don’t even know whether their article was given heavy feedback or simply rejected.

P.S. I don’t know which would be more profitable in your example without knowing the expenses. Greater revenue, yes. Greater profit, not enough information to say.

Right, that’s why I said “all things being equal”. I’m with you in that I have my doubts about how feasible such interventions are, and they are part of the long list of increased demands being made by the community that would drive up costs while at the same time the community is calling for price reductions (you’ve recently discussed similar ones regarding Plan S transparency declarations as well).

Yes, “in many cases, reviewers are far less experienced researchers than the scholars they are reviewing.” In other words, it is difficult for journal editors to find real “peers.” That is fine for marketing to the public that the work has been properly evaluated, but it is basically deceptive. Scholarly Kitchen has seen my past explorations of the NCBI’s “PubMed Commons” solution, which was deemed a failed “experiment” just as it was getting through its initial five years of teething problems.

Donald, I would be curious to see data on who completes reviews and their background. If the reviewers are often less qualified than the author of the piece, the review can still have merit but may not be a good indicator of whether the article contributes to the field and should be accepted or not.

Matching reviewers to a paper is at the heart of the problem. At the very least, post-publication reviews in PubMed Commons (which, unlike PubPeer, was not open to all for comments) sometimes brought to editors’ attention non-high-profile folk with deep expertise.

I wonder if peer review attitude varies by domain or if anyone has looked at this quantitatively. I’m an AE for an applied environmental science journal and see a lot of reviews. I’d say >90% of reviews are in the ‘bettering’ category, around 5% are in the positive but insufficient category, and maybe 5% are in the battering category. Granted, for an author, receiving a battering review probably sticks in the memory more than 5 bettering reviews.

Hi Chris,

I am glad to hear it, and if my experience reflected yours, I don’t think I would have set out to write this piece. Unfortunately, I see a lot of what I deem inappropriate reviews and many scholars who struggle with responding to such reviews. There are also reviews that are appropriate but leave authors at a loss for how to move forward. This may have to do with the specific audience I work with (ESL scholars), who seem to receive a disproportionate number of such reviews. I have a few theories as to why this may be and might write them up as a separate post.

Regarding STEM book publishing and editor involvement: most book acquisitions editors are familiar with the market, not the science, and few get involved in the writing of a manuscript. They use reviewers, and usually only ask for a review of the proposal, not of the raw manuscript. The publisher will review the submitted manuscript for grammar, style, usage, and continuity of thought.
Regarding English as a second language authors: this is indeed a problem. But it is the author’s responsibility to ensure that the audience understands what is being conveyed. I realize that is a hard statement. But before submitting an article for review, the author should pass it by someone proficient in the language in which the author plans to submit it.

Hi Harvey,

I think your demand that ESL scholars pass their manuscript by someone proficient in the language is easier said than done. For scholars from low-income countries, professional services can be prohibitively expensive, and colleagues will likely not understand their research well enough to properly help. If publishing were a one-off activity, I would tend to agree with you, but if we require continuous publication every year, I believe it is the responsibility of publishers, who profit from the benefits of said research, to support scholars without resources.

I agree wholeheartedly with you, Avi Staiman. It’s time to stop breaking people who could be great authors. I work with lots of multilingual researcher-authors, and I see the damage that the current STM peer-review system does to so many of them.

Thanks Theo. I hadn’t seen this article before and am glad to see that the argument has been made by others far more qualified than I. I think if the reviewers were employed by the publishers (as in the acquisitions editor model) then more serious training and oversight could be achieved.

Hi Avi, I enjoyed your post in The Scholarly Kitchen. I think you are absolutely correct about the changing focus from “bettering” to “battering.” No doubt this is a result of reviewer fatigue, the dramatic increase in submissions globally, and an overall lack of education. I started my career as a books acquisitions editor working closely with authors. Later, I was fortunate to work with journal editors who spent time nurturing good reviewers and authors. They would rate the quality of a review and provide feedback. Sadly, those times are gone.

It might be interesting to propose a curriculum for reviewers, perhaps in a presentation at a meeting such as SSP.

Thanks Rebecca. What would be your initial thoughts as to what such a curriculum for reviewers would include?

While I agree that ‘bettering’ is a nice goal of the review process, I have to admit that I see many submissions where the research design has simply been poorly planned, and, especially when the results do not match expectations, I have seen many papers where the authors try as they might to twist the interpretation in the direction their supervisor foresaw in the research grant proposal. So in my eyes peer review is flawed by the fact that in certain countries Ph.D. students must have a certain number of publications to be admitted to defending their Ph.D. Yet if at the end of the three or four years the results are not there, because the goal of the project was, in the worst case, built on other published research whose results had been interpreted from a ‘wishful thinking’ perspective, or, in the better case, was ambitious but not successful, one should recognize that the student worked according to good scientific practices, yet that the starting hypothesis was not confirmed.

That highlights another problem: publishing is always supposed to be ‘positive’, i.e., producing ‘good’ results. Reporting failure, and the detailed ways one has tried to solve issues, how the verification of practice was done, and so on, is just nothing a publisher would be interested in, because it is less likely to be cited. Yet I’m convinced that if we had a culture of also sharing all the things that didn’t pan out, and why we think they didn’t pan out, we could save billions in needlessly spent research funding.

Before I get carried away, what I wanted to share is this: sometimes, if bettering is not possible, at least not by just adding some information and modifying the argument, kind battering is necessary, because otherwise the paper will make it through review and be published. In order to prevent scholars new to the field from being fooled by papers that are not sound, this is what a responsible reviewer must do.
There is another way of solving this last issue: associate, in the digital world, a thread with each paper where scholars are free to comment, call into question, criticize, or suggest improvements to a paper, with their details disclosed (e.g., ORCID number or similar), such that you have to think twice before pointing out a flaw, since you put your own reputation at risk if you’re wrong. On the other hand, if you are sure about your argument, then this provides real help to other scholars, who can at least think about it and then make their choices.

“There is another way of solving this last issue … .” Yes, accredited, non-anonymous scholars, who may not have been involved in the first round of reviewing, get into the act (post-publication review). Sounds much like the PubMed Commons “failed experiment” (see above).

” If we can transform dialogue between author and reviewer into a constructive and positive one, it would help authors improve their research and give them a better shot at surviving the reviewers’ critique.” I’m all in for a better world. However, my own experience is that many authors do not even try to see that the reviewer’s comments are meant to improve their paper, free advice to make them look better. They just want to get it over with at minimal investment and turn the page. This is in part a consequence of the “length of publication list” and “publish or perish” mantras that the scientific community has adopted without giving them a second thought.
