Editor’s note: Today’s guest post is by Chhavi Chauhan, Director of Scientific Outreach at the American Society for Investigative Pathology (ASIP), and Chirag Jay Patel, Head of Sales and Business Development, Americas at Cactus Communications.

Like many business leaders, publishers face increasingly uncertain environments: constantly evolving policies and best practices, combined with largely people-driven workflows, lead to higher employee turnover and burned-out authors, reviewers, and staff.

In the future of peer review, organizations will likely continue to confront very similar challenges: timely and effective policy and practice updates, including accountability measures to ensure they actually happen; pressure to redesign workflows to align with new business models; and an exhausted community of people, forced to do more and more with less and less.

To overcome some of these challenges and address their communities’ evolving needs, publishers must respond with equal emphasis on supporting both the people and the machines assisting them. They must strike a balance between using complementary tools to improve efficiency and streamline workflows, and developing their teams so that they can leverage these tools to their fullest potential.

To better understand how humans and AI can collectively improve peer review, we asked two publishing experts who specialize in human and AI ethical, equitable, and sustainable publishing solutions to share their thoughts on the future of peer review. Chhavi Chauhan is the Director of Scientific Outreach at the American Society for Investigative Pathology (ASIP), and Chirag Jay Patel is the Head of Sales and Business Development, Americas at Cactus Communications. Together, we explored the changing landscape of peer review and focused on practical ways to navigate where we are to get where we want to be in the future.


Understanding Your Role

Prior to making any change, it is important to take some time to identify where you currently sit. Often we know what our job descriptions say we do, but as we incorporate tools designed to augment our work, some self-reflection is necessary to fully understand our roles and how we best interact with the environments we occupy. Here is a glimpse into how our experts have come to understand their roles.

Chirag: My focus is helping authors and publishers improve their manuscripts and prepare them for submission. A big part of what I’ve come to understand about my role is that I am here to help uphold research integrity, and I’ve done this by helping detect problematic manuscripts early. Another large part of my role is to improve the author experience, provide tools to publishers that reinforce research integrity, and encourage the publication of sound research and science.

Chhavi: I’ve come to understand that I must wear many hats. Specifically in the context of the peer review process, with my past training as a scientific researcher, I oversee the scientific editing of all published content to ensure that the impact of published findings is conveyed to the readers appropriately and accurately and that the manuscripts adhere to all parameters of scientific rigor and reproducibility. I am closely plugged into the ASIP Editorial Office and I constantly interface with the editorial office staff to oversee and ensure that all scientific publications conform to international as well as internal ASIP publication standards. Often I serve as an additional peer reviewer, to help elevate the quality and impact of the reported findings by making recommendations to the authors. I also occasionally facilitate decision-making for the Editors-in-Chief (EICs) of the society-managed journals on select articles, when there is a wide range of decisions provided by the peer reviewers and an additional deciding vote is needed. In addition, along with the EICs of the society-managed journals and the Director of Scientific Publications, I address issues pertaining to scientific integrity and misconduct in the published content, if and when they arise.

Kudos to Peer Review!

We hear so often about the things that are broken in peer review. Not enough reviewers, slow turnaround times, and imperfect measures of impact. Rarely do we raise our glasses to the things that are going well, or that have greatly improved in peer review. However, as we move into an environment assisted by AI, being aware of what is going well and what we should celebrate enhances our motivation to get things done. Here are a few things that our experts are excited about in the future of peer review.

Chirag: I believe peer review is having a bit of a renaissance, especially with the growth of papermills, predatory publishing, compromised peer review, and a growing list of retractions. While it is not perfect, peer review is one of the best processes we have to reduce the amount of junk science that gets published!

Peer review has also changed over the years. It is no longer conducted by a select group of educated men who discuss and critique papers. For many publishers, peer review is more diverse than it has ever been: peer reviewers come from many different countries, and although we are still striving to eliminate the gender disparities that exist in some fields, more women are peer reviewers than in the past. I am also excited about efforts to include more early-career researchers.

While single-anonymous peer review is the most common, it is no longer the only flavor of peer review. Here are a few that I’ve found to be pretty exciting:

  1. Double-anonymous peer review (both the reviewers and the authors are anonymous to each other)
  2. Triple-anonymous peer review (author(s), reviewer(s) and editor(s) all remain anonymous from each other)
  3. Open peer review (this peer review style has a ton of flavors, but here I’ll just mention the version where the identities of both the authors and the reviewers are disclosed)
  4. Post-publication peer review (occurs after a paper has been published)
  5. Crowdsourced peer review (involves a broader audience, including non-experts and the public)

It is great to see journals and publishers experimenting with new types of peer review to improve the author experience, speed up publication times, and provide transparency.

Lastly, as an advocate for humans and AI, I am thrilled to see that technology is also playing a bigger role in the peer review process! AI was already being used for plagiarism detection, and now its use has grown to cover many other areas of the publishing process: evaluating manuscripts for quality, language, and formatting; checking images for manipulation; spotting papermill submissions; identifying peer reviewers; extracting metadata from papers to reduce data-entry errors; flagging citations of retracted papers; and identifying trends. These are exciting developments, since before AI these steps in the publication process took up a lot of time. By leveraging AI, authors can improve the quality of their research and increase their productivity. For editors, AI can free up time to focus on decision-making, give authors more guidance and support, help with content selection, and guide their publication in a strategic direction.
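To make one of these checks concrete, here is a minimal sketch of flagging citations of retracted papers. The DOIs and the lookup set below are hypothetical placeholders; a production system would query a maintained retraction database (such as the Retraction Watch data) rather than a hardcoded set.

```python
# Hypothetical set of known-retracted DOIs; in practice this would be
# populated from a retraction database, not hardcoded.
RETRACTED_DOIS = {
    "10.1000/retracted.001",
    "10.1000/retracted.002",
}

def flag_retracted_citations(reference_dois):
    """Return the sorted subset of a manuscript's cited DOIs that are known retractions."""
    return sorted(
        doi.strip().lower()
        for doi in reference_dois
        if doi.strip().lower() in RETRACTED_DOIS
    )

# Example manuscript reference list (hypothetical DOIs).
manuscript_refs = [
    "10.1000/sound.042",
    "10.1000/retracted.001",  # cites a retracted paper
    "10.1000/sound.043",
]
flags = flag_retracted_citations(manuscript_refs)
```

Even a simple screen like this runs exhaustively over every reference, which is exactly the kind of tireless, repetitive check that frees editors for judgment calls.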

Chhavi: Since I transitioned into the scholarly publishing industry 12 years ago, I have realized that this is a (rather slowly but constantly) shifting domain that always strives to adjust to the needs and demands of all stakeholders and external entities including the researchers, authors, societies, publishers, available market tools and emerging trends, regional and international publishing mandates, grants, etc.

What I am most excited about in the peer review process is the renewed focus on making it more efficient, engaging, and inclusive for all authors, editorial staff, and peer reviewers by leveraging innovative, transformative technologies like AI to simplify submissions, decrease article turnaround times, streamline workflows, and provide equitable opportunities, especially for folks with language barriers or from resource-limited settings.

As publishers begin embracing AI, they are also increasing the bandwidth of scholarly publishing professionals to institute human-led, forward-thinking approaches: proactively training more early-career professionals to be extraordinary peer reviewers, and instituting policies to engage more diverse reviewers, making the process more balanced while decreasing burnout among overused enthusiastic reviewers. AI can also be used to ensure that equitable practices are in place to increase representation on Editorial Boards, and to focus dedicated initiatives and efforts on soliciting high-quality content from minority communities that struggle to make the impact they deserve.

I am excited about the industry taking a more holistic approach to quality content solicitation and synthesis — seamless and engaging peer review focusing on scientific integrity, rigor, and reproducibility; efficient content processing to expedite its discovery; and effective content dissemination for broader and more immediate impact.

We are currently living in the generative AI wave, but the interactive AI wave is fast headed our way. I strongly believe that this new AI revolution will change the way we envision the consumption of scholarly content in the future. Our personal AI assistants will be capturing the essence of our interactions: generating summaries from the talks at conferences we attend; consolidating information from concurrent sessions via session recordings to help us identify content we may want to review more deeply; finding matches between the content and specific scopes of the journals to prompt us to solicit specific content to feature in our journals; and so on. These personal assistants will also dictate shifts in the peer review process where interactions may happen between sets of virtual assistants (journal assistants, author assistants, you get the idea). A new and exciting challenge will be to mindfully navigate the space to continue to engage the new breed of tech-savvy authors, peer reviewers, and readers by adopting new tools — while maintaining human-led practices to continue to engage seasoned stakeholders who value human interaction. I am eagerly looking forward to seeing how these interactions shape up to renew the way we currently approach peer review.

Areas to Approach with Caution

We are excited to see and experience all the progress that has occurred in peer review, but we are also aware that there is still room for improvement. If celebrating our victories is how we build and increase motivation, identifying problem areas is equally important to understanding what we need to focus on to get the job done. As we lean into integrating new tools and systems into the processes of peer review, there are a few things to be on the lookout for.  Here are some suggestions that our experts think should be taken into consideration as we prepare our current spaces for the future.

Chirag: Even with the use of AI there are still not enough qualified peer reviewers in the world to meet demand. Therefore, we should be careful to consider that, as submission volumes increase, the current pool of peer reviewers will be even more overburdened and under greater pressure to submit reviews quickly to help reduce publication times. We also need a better way to recognize the work performed by reviewers and reward them; this is important to retain and recruit expert reviewers. While reviewers are more diverse today than ever before we still need to address biases with gender, geography, and institution.

Chhavi: With great power comes great responsibility. There is no doubt that AI is a powerful tool that will (eventually) reshape how we approach peer review in the future. However, it is largely in our hands to stay focused on ensuring that AI is ethically and responsibly incorporated into our workflows to improve efficiency, so that it becomes a transformative force instead of a disruptive one. There is an urgent need to update policies, institute guardrails, establish ethical frameworks guided by human needs and oversight, and, above all, take a holistic approach to understanding not just the intricacies but also the big picture — so that we embrace AI in peer review for the benefit of all stakeholders, while maintaining scientific integrity. We also need to generate awareness among all stakeholders to ensure that this transition happens ethically and responsibly. For example, it would be a travesty to lose organizational or publication prestige through the blind use of large language models (LLMs) like ChatGPT for generic content synthesis; compromised peer review in which users accidentally share novel findings globally before publication and without appropriate attribution; exacerbation of existing inequities by limiting opportunities for folks in resource-limited settings; creation of new inequities through the institution of regional mandates and regulations; publication of inadequately vetted, falsified, papermill-generated content; false author and reader claims from non-existent “individuals”; and rampant AI-enabled image manipulation that can weaken the fabric of scientific rigor and reproducibility while laying the foundation for more bogus “discoveries.” The list of risks is endless, depressing, and alarming!

Skills Needed to Pay the Bills

If you’ve skipped the step that aids with motivation, this next suggestion might be very difficult to achieve. In order to effectively utilize some of the benefits that AI might bring, it is necessary to ensure the people behind the machines are both qualified to teach them and motivated to learn how to use them. In the future of peer review, upskilling will be a critical component of success. Here are a few capabilities and skills our experts believe publishers will need in the future to get some of these new jobs done.

Chirag: The first skill that will be required in the future of peer review will be prompt engineering. It is important to know the right questions to ask an LLM so you can get the (correct) answer you need quickly. Other useful skills will be deciphering what LLMs say and spotting AI-generated content. AI will manage most of this work but there will still be a need for skilled editors since AI is not 100% reliable. Digital, social media, and UX capabilities will grow in importance as competition for readers increases. As more and more content becomes Open Access, with authors the clients, it will be vital that publishers are able to attract and keep the best of the best talent — this is where experienced editors will become more valuable.
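To illustrate the prompt-engineering point above, here is a hedged sketch of a structured prompt for manuscript triage. The template, field names, and question are all hypothetical; the idea it demonstrates is that a specific, constrained prompt is more likely to elicit a usable answer from an LLM than an open-ended one, and that templates keep prompts consistent across a workflow.

```python
# Hypothetical triage prompt template. A real pipeline would send the
# rendered prompt to an LLM API; here we only show the construction step.
TRIAGE_TEMPLATE = (
    "You are screening a manuscript for a {field} journal.\n"
    "Title: {title}\n"
    "Abstract: {abstract}\n"
    "Answer with exactly one word (YES or NO): does the abstract "
    "state the study design and sample size?"
)

def build_triage_prompt(field, title, abstract):
    """Render the template; constraining the output format simplifies parsing the reply."""
    return TRIAGE_TEMPLATE.format(field=field, title=title, abstract=abstract)

prompt = build_triage_prompt(
    field="pathology",
    title="A hypothetical study",
    abstract="We examined 120 tissue samples in a retrospective cohort.",
)
```

Asking for a one-word, yes/no answer is a deliberate design choice: it makes the model's reply machine-checkable, whereas "what do you think of this abstract?" would not be.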

Chhavi: In my opinion, our industry has historically depended largely on “attention to detail.” Besides subject matter expertise, attention to detail will continue to be the most needed skill going forward. As we continue to embrace AI, ensuring that this integration is needed, appropriate, sustainable, and scalable will require us to rely heavily on human stakeholders, so that checks and balances are adequately followed to minimize disruption while maximizing the positive impact of the upcoming transformation. New waves of AI tools like LLMs will be constant. By rooting ourselves deep in our values and subject matter expertise, we will be able not only to withstand the deluge of new products, tools, and processes but also to weather the tsunami they may bring about, should we fail to take a holistic approach to ensuring they are transformative in positive ways. It will be worthwhile to keep our fingers on the pulse, to understand the short-term as well as long-term utility of new tools and capabilities, and to identify the skills and strengths that will be long-lasting. These include human subject matter expertise, the ability to think creatively to transform for the better, attention to detail, empathy to engage with all stakeholders, and agility and problem solving for both the intended and the unintended.

Career Evolution is on the Horizon

As we discuss more and more ways to integrate AI into how we work, you might think that there will be less and less for our people to do.  But that is just not the case.  If anything, as AI works to take on tasks that it is suited for, our people should be on the lookout for an evolution in their roles and career options.  Here are a few ways our experts see jobs growing, changing, and evolving.

Chirag: Every stakeholder in the publishing industry will need to adapt to AI and find ways to use it to augment their work and enhance their capabilities. Ignoring AI will not be an option because it will become integrated with every step of the publishing process. Very much like how the migration to digital publishing and the internet created new roles across publishing, the growing use of AI will lead to new roles for people who know how to leverage it to deliver results. AI is not (yet) at a point where publishers can rely on it fully, so there will be a need for people who can fine tune algorithms; verify, validate, contextualize; build relationships; and communicate content. In short, embracing AI and learning how to work with it effectively will be key in this changing landscape of publishing.

Chhavi: One thing that I have realized in my scholarly publishing career so far is that “change is the only constant.” In my short career, I have seen (slow though eventually drastic) changes in the management of peer review, with many evolving models that have their own merits and weaknesses. There has been a revolution in the dissemination of peer-reviewed content from print to online, to hybrid and open access, that directly affects its impact and utility, as well as dictating how future content is solicited and synthesized. Though my own career, responsibilities, and practices have pivoted over time to keep up with the emerging needs and demands of the constantly evolving peer review process, these changes have been slow enough (so far) for me to adapt to the shifting sands.

The three big challenges ahead of us seem to be: making high-quality, peer-reviewed content open access yet sustainable for all stakeholders; switching publication models to adhere to regional and international funder constraints; and embracing emerging tools and technologies like AI holistically so that they become positively transformative instead of negatively disruptive!

I do not envision any immediate shifts in the peer review career paths. However, I am personally a constant learner and I strongly urge all readers to continue to learn about new tools and explore their utilities. It is critical for us to stay agile and pivot as needed; to upskill ourselves to responsibly leverage transformative technologies to our advantage; to think innovatively to leverage these transformative tools to create new opportunities; and to learn new tricks of the trade that come with these tools in times of disruption, so that with our knowledge, experience, skills, and intellect we continue to stay relevant and visible.


Thank you, Chhavi and Chirag! We appreciate our experts lending their thoughts on the future of peer review. We welcome our audience to join the conversation. What are your thoughts on the future of careers and skills that will be needed in peer review and publishing?

Chhavi Chauhan

Chhavi Chauhan, Ph.D. (She/Her) is Director of Scientific Outreach at the American Society for Investigative Pathology (ASIP) and Director of the Continuing Medical Education (CME) Program at the Journal of Molecular Diagnostics. She is a past Co-Chair of the Diversity, Equity, Inclusion, & Accessibility (DEIA) Committee of the Society for Scholarly Publishing (SSP) and a current staff member of the Committee for Equal Representation and Opportunities (CERO) at ASIP.

Discussion

8 Thoughts on "Guest Post — Striking a Balance: Humans and Machines in the Future of Peer Review and Publishing"

Very useful exchange of thoughts on an ever evolving field with good recommendations.

I am so glad that you found the exchange of ideas and our perspectives and recommendations valuable. Thank you for reading the post and sharing your positive feedback, Dr. Rale!

Thank you for your thoughts. I note that both of you focus on various aspects of rigor and reproducibility, but neither of you mentioned how poor these are in the published literature. Nearly every study of this, including ours, concludes only one thing: whatever you measure, there is a lot of room for improvement. While most authors, reviewers, and editors certainly do pay attention to aspects of rigor, they are not very good at finding what is wrong, and for many aspects of a study so few manuscripts address them that even the most dedicated readers will experience fatigue.

This is where I see rigor improving, because AI will not experience review fatigue if it sees no blinding reported in each and every study.

You are welcome, Anita, and thank you for your thought-provoking comments.

I completely agree with you that rigor and reproducibility in peer review are oftentimes overlooked; the number of corrections issued by journals and the unfortunate rate of retractions are a testimony to this. I also agree with you that there is an immense opportunity to (mindfully, intentionally, and responsibly) leverage AI with a human expert in the loop to make significant progress in further enhancing rigor and reproducibility.

As the number of supplementary figures with a wide range of supporting data increases (oftentimes for very prestigious journals), as does the amount of data presented in the articles themselves, especially for journals reporting high-throughput/microarray/informatics results, it is highly unlikely that any human reviewer will ever capture the whole depth of data reported and attest to its credibility.

There is most certainly an opportunity to leverage AI for rigor improvement, and some emerging organizations have in fact started experimenting with this, though some information remains proprietary at this point. You may like to read this article published in The Chronicle of Higher Education today: “‘We’re All Using It’: Publishing Decisions Are Increasingly Aided by AI. That’s Not Always Obvious.”

I’ve heard a lot about how AI and LLMs will provide more equitable access to publishing for authors with limited resources or who don’t speak English as a primary language. However, I’m interested in the details of how exactly that will happen. Not everyone who speaks English as an additional language will have the same barriers (or even make the same language errors depending on their primary language), and “limited resources” can mean a lot of things. One could even question how equitable it is to further enforce Standard American English as a publishing default through AI. I don’t even know how well AI and LLMs perform in languages that aren’t English.
I’m not against authors using AI-supported tools, but I’d like to know what challenges and trade-offs they may encounter in their use.
Can the guest authors (or anyone else) recommend additional resources about this topic?

Dear Crystal,

You are absolutely right about the widespread buzz about leveraging AI to make publishing more equitable and accessible, especially for authors in resource-limited settings as well as non-English-speaking authors. You accurately point out that “limited resources” may mean many different things in various contexts, and that could be a focused discussion in itself.

Though there are several regional as well as publisher-specific and vendor-specific LLMs and AI tools in the works, the vast majority of the original big players and models leveraged English as the main language. However, there have been vast improvements in real-time language translation using these models. Many publishers and partners are in fact testing these tools to make content more accessible to folks in different parts of the world. The biggest challenge so far has been the accuracy of these translations, which is now improving but should still rely on a human language expert for full vetting before being made available. The vast majority of publishers stayed away from providing language translation services in the past due to the associated costs for smaller markets. However, adoption of reliable AI will minimize these costs in the future, opening up these smaller markets for better participation.

The one trade-off that immediately comes to mind is the inherent bias in the datasets used to train AI models. If the training data over-represents publications from a particular region or set of institutions, AI can unfairly skew results: assigning positive outcomes for submission, acceptance, etc. to authors or institutions widely represented in the training datasets, and negative outcomes to new authors, authors from less renowned academic institutions (despite great work), authors from minoritized communities, and so on.

Without promoting any tools, I am sharing some publicly available resources: https://www.elsevier.com/solutions provides a list of Elsevier’s digital solutions with a note regarding market-specific solutions. Here are some author-specific tools and resources: https://beta.elsevier.com/researcher/author/tools-and-resources?trial=true Wiley Partner Solutions has a similar list: https://www.wiley.com/en-us/business/partner-solutions/solutions#floating-menu__list-1 Essentially, the services supported by the proprietary tools currently in testing phases should be forthcoming.

Hope this helps.

Thanks for this compilation of views regarding the potential impact of AI on peer review.
One of the risks mentioned is that users (authors, reviewers, editors) may accidentally share novel findings globally before publication.

I would like to point to a recently published (initial) guideline by the DFG (German Research Foundation) for dealing with generative AI.

https://www.dfg.de/download/pdf/dfg_im_profil/geschaeftsstelle/publikationen/stellungnahmen_papiere/2023/230921_statement_executive_committee_ki_ai.pdf

According to those guidelines, the use of such AI tools is inadmissible in the preparation of reviews due to the confidentiality of the assessment process: “Documents provided for review are confidential and in particular may not be used as input for generative models.” In my view, this applies to peer review, too. But who is responsible for ensuring that submitted manuscripts are not fed into AI systems, where they are normally also used to train LLMs?

You are very welcome, Georg, and thank you for your thoughtful comment as well as for sharing the recently published guidelines by the DFG.

I completely agree with you that the said guidelines do and should apply to peer review as well. There is a general consensus about not using submitted manuscripts as input for generative models. However, reviewer awareness may vary across institutions and regions, and in the absence of guiding frameworks and appropriate checks and balances, one cannot fully eliminate this possibility (accidental or intentional).

I would think that the societies and publishers overseeing the peer review process should take responsibility for ensuring that submitted (unpublished) work is not fed into AI systems for training LLMs; however, it is likely, and also my current understanding, that big publishers are internally testing proprietary AI tools, including LLMs, to improve their performance on scholarly content. I believe it makes sense to do so using a sandbox approach to maintain the trust, novelty, and integrity of the unpublished content.

There has been a push towards using AI tools to improve rigor and reproducibility, to mitigate scientific misconduct, and to potentially combat the deluge of papermill articles that can transiently paralyze any editorial workflow. I would imagine that only a robust and reliable AI tool trained on recent, quality content will be capable of helping resolve these issues, and hence the need to train these models on recent, if not real-time, quality content is pressing if we are to remain ahead of the (malicious) curve.

Comments are closed.