Editor’s note: Today’s post is by Hong Zhou with support from Sylvia Izzo Hunter. Hong is Director of Intelligent Services and Head of AI R&D for Atypon, part of Wiley Partner Solutions, where he is responsible for overseeing the implementation of artificial intelligence–driven information discovery technologies. Sylvia is Manager, Product Marketing, Community & Content, at Wiley Partner Solutions.

In our last post, we gave an overview of how intelligent services can leverage artificial intelligence (AI), big data, and cloud computing to facilitate digital transformation in the scholarly publishing journey. Today’s post focuses on the submission and review stage, and explains how digital transformation happens and how intelligent services can enable trustable, transparent, and efficient submission and review.


Challenge and change in submission and review

Peer review is fundamental to the development and publication of any research paper. It’s also a process that poses challenges for both publishers/societies and researchers—and faces challenges to its value and integrity.

For publishers, scalability and cost are the obvious challenges. Current workflows and processes cannot cope with increasing submission volumes and are not easily or cost-effectively scalable. Open access (OA) publishing, in particular, is growing much faster than the underlying market, with output estimated to have grown at a compound annual growth rate of 12.5% from 2019 to 2022. At the same time, the difficulty of recruiting peer reviewers and journal editors continues to be a pain point, exacerbated by high academic workloads and pandemic burnout.

In addition, the growing number of articles retracted for research misconduct is casting a shadow on the reputation of research worldwide. Data falsification, image manipulation, unreliable results, duplicate publication, plagiarism, and fake peer review are the main forms of misconduct that trigger retractions. Rising article volumes require publishers to devote greater resources and effort to screening for these issues.

Scalability, cost, and the drive to reduce time to publication intersect with the need to maintain research integrity to create a conundrum for publishers. A 2017 study found that one-third of journals took more than two weeks to issue an immediate (desk) rejection, and one-sixth took more than four weeks. According to SciRev data, the average duration of peer review across all scientific fields is 17 weeks, and across all disciplines the average time to first response is 13 weeks. While a more recent review found an overall decrease in duration of peer review, Christos Petrou shows that this improvement is due almost entirely to speed increases by MDPI, which now averages 36 days from submission to acceptance and just five days from acceptance to publication.

On the other hand, as Serge P.M. Horbach notes, concerted efforts to review and publish new research more quickly “may raise concerns relating to the quality of the peer review process and of the resulting publications.” Luisa Schonhaut et al. compared publishing practices for COVID-19 and human influenza and concluded that “The high number of acceptances within a day or week of submission and the number of retractions and withdrawals of COVID-19 papers might be a warning sign about the possible lack of a quality control process in scientific publishing and the peer review process.”

For researchers, the submission and review process can be painful, time-consuming, and full of friction. For many reasons, the burdens often fall most heavily on those early in their careers, who are under particular pressure to publish and are more likely to act as corresponding author but less likely to be the principal investigator. This means that they frequently have publishing-related administrative work delegated to them but rarely have anyone to delegate it to in turn. In addition, they don’t yet have much experience to draw on in writing publishable papers or in deciding which journals best suit their work.

The challenges researchers face at this stage in the publishing process include (but are not limited to) systems that require manually entering data that already exists in the files they’re submitting, lengthy and unpredictable response times from journals, lack of transparency in the peer-review process, varying submission requirements that require extensive reformatting to submit a rejected paper to a new journal, and various forms of potential bias in editorial and peer review.

The variety of types of OA agreements with libraries, institutions, and funders adds an additional hurdle along the road from submission to publication: which authors should be charged which fees, and what if no payment arrives?

So how can journals and publishers handle a growing number of submissions quickly and effectively, without compromising on the quality of their processes? As we move from a journal-centric to an author-centric publishing model, how can we make submission and review more efficient and less onerous for researchers?

Leveraging the digital ABCs in submission and review

While reviewing existing policies and processes to resolve pain points, as well as providing training for editors and peer reviewers, is obviously key to improving submission and peer review, intelligent services can help address many of these challenges by leveraging the digital ABCs (AI, big data, and cloud computing).

For example:

  • Use a SaaS model to provide end-to-end workflows to publishers, allowing them to plug applications (such as authoring, review, and payment tools) in and out and to integrate seamlessly with upstream and downstream pipelines and applications.
  • Empower reviewers and editors to better evaluate submitted research through automated or semi-automated tools that help weed out poor-quality, plagiarized, or irrelevant submissions prior to peer review by checking quality, coverage, format, novelty, influence, and relevance.
  • Improve author satisfaction through more automated submission systems (e.g., authors verify auto-ingested metadata rather than re-entering it manually), auto-suggested reference and journal recommendations, auto-transfers, and automated formatting. Author-facing quality checks such as writing improvements, translation, and citation checking can also be offered through intelligent services.
  • Optimize operations by leveraging AI and big data to track the whole process, identifying patterns, bottlenecks, and opportunities for improvement.
  • Broaden and enrich reviewer and editor pools, using intelligent services to auto-suggest potential peer reviewers and auto-detect conflicts of interest (a minimal matching sketch follows this list).
  • Detect violations of research integrity including the products of “paper mills,” image manipulation, fake authors and reviewers, plagiarism, undeclared duplicate submissions (see the text-overlap sketch after this list), and indicators of sexism, racism, and other types of bias.
  • Measure review quality and rank reviewers over time based on the quality of their reviews and their response time; identify good reviewers more easily so that you can recognize their contributions and offer them incentives to review again.
  • Enrich content through automated recognition and disambiguation of authors and institutions; automated processing, correction, and linking of bibliographic references; and automated checks for citations to retracted articles or articles published in predatory journals.
  • Save time and reduce billing errors through more efficient handling of multiple OA agreements and APC rates. Automating author and institution disambiguation can also facilitate APC calculations and answer questions about eligibility within OA agreements.
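
To make the reviewer-matching idea above concrete, here is a minimal sketch (not any vendor’s actual method) that ranks hypothetical reviewer profiles by textual similarity to a submission’s abstract and screens out co-authors as a crude conflict-of-interest check. Production systems draw on much richer signals, such as citation networks, subject taxonomies, and reviewer workload.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical reviewer profiles: a name, the concatenated abstracts of
# recent papers, and known co-authors (a crude conflict-of-interest signal).
reviewers = [
    {"name": "Reviewer A", "profile": "graph neural networks for molecule property prediction", "coauthors": {"Lee"}},
    {"name": "Reviewer B", "profile": "peer review workflows and research integrity screening", "coauthors": set()},
    {"name": "Reviewer C", "profile": "transformer language models for scientific text mining", "coauthors": set()},
]

def suggest_reviewers(abstract, authors, reviewers, top_k=2):
    """Rank reviewers by TF-IDF cosine similarity to the submission,
    excluding anyone who has co-authored with the submitting authors."""
    eligible = [r for r in reviewers if not (r["coauthors"] & set(authors))]
    corpus = [abstract] + [r["profile"] for r in eligible]
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(corpus)
    scores = cosine_similarity(tfidf[0:1], tfidf[1:]).ravel()
    ranked = sorted(zip(eligible, scores), key=lambda pair: -pair[1])
    return [(r["name"], round(float(s), 3)) for r, s in ranked[:top_k]]

print(suggest_reviewers(
    "language models for mining scientific text", ["Lee", "Kim"], reviewers))
```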
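In the same spirit, a first-pass screen for undeclared duplicate submissions or recycled text can be illustrated with word shingles and Jaccard overlap. This is a toy sketch under simplified assumptions; real plagiarism-detection services add stemming, fingerprint indexing, and very large reference corpora.

```python
def shingles(text, k=5):
    """Return the set of overlapping k-word shingles in a text."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(max(len(words) - k + 1, 0))}

def overlap(doc_a, doc_b, k=5):
    """Jaccard similarity of shingle sets: 0.0 (disjoint) to 1.0 (identical)."""
    a, b = shingles(doc_a, k), shingles(doc_b, k)
    return len(a & b) / len(a | b) if (a or b) else 0.0

new_doc = "we report a novel catalyst that improves reaction yield under mild conditions"
old_doc = "we report a novel catalyst that improves reaction yield in aqueous media"

# A score above a tuned threshold routes the pair to a human editor:
# the tool only flags; the editor decides.
print(f"shingle overlap: {overlap(new_doc, old_doc):.2f}")
```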

Current limitations and future opportunities

Figure 1 below shows some examples of intelligent services currently available in the scholarly publishing industry (presented at the London Book Fair in April 2022).

Figure 1. Logos of vendors offering intelligent services in scholarly publishing.

The limitations of the current approach should be clear: Each of the vendors shown in Figure 1 (and there are more!) focuses on specific issues in specific domains, each requires different input and output formats and involves a different user experience, and none provides a comprehensive solution that can be seamlessly integrated into the publishing workflow, with minimal effort and training.

Nevertheless, we do already have many of the tools needed to achieve the goals of digital transformation in submission and peer review: reduced costs, improved margins, less wasted time, and earlier detection of research integrity issues; scalable solutions to meet publications’ needs both now and into the future; and an author-centric approach that offers a better experience for researchers and journal staff alike.

Looking forward, what do we see?

As the Open Science movement produces increasingly complex scientific analyses and rich research outputs that include not only articles but also data, models, physical samples, software, media, and more, those outputs also need to meet the FAIR criteria (findable, accessible, interoperable, and reusable). Developing shared storehouses for data, submissions, and images — a direction that STM publishers are heading in — could be key to making AI tools better trained, and thus more useful, allowing detection of integrity issues such as duplication and image manipulation across, as well as within, publications.
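
As one illustration of what a shared image storehouse could enable, near-duplicate figures can be detected with perceptual hashing. The sketch below implements a basic average hash using the Pillow library; it is a simplified illustration only, and production image-forensics tools use far more robust techniques that survive cropping, rotation, and local manipulation.

```python
from PIL import Image  # pip install pillow

def average_hash(path, hash_size=8):
    """A simple perceptual hash: shrink to hash_size x hash_size, convert to
    grayscale, and record whether each pixel is above the mean brightness."""
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    return "".join("1" if p > mean else "0" for p in pixels)

def hamming(h1, h2):
    """Count differing bits; near-duplicate images have small distances."""
    return sum(c1 != c2 for c1, c2 in zip(h1, h2))

# Two figures whose hashes differ in only a few bits are candidates for the
# same source image, even after re-compression or light editing, e.g.:
# hamming(average_hash("figure_1a.png"), average_hash("figure_3b.png")) < 5
```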

The greatest challenge, however, will be to find experienced and competent methodological reviewers for the growing number of submissions that include these rich research outputs. Tools using AI, big data, and the cloud will continue to develop to better facilitate the work of editors and reviewers.

Peer review can evaluate not only submissions themselves but also new knowledge extracted from them via AI using domain-specific taxonomies/ontologies, which can then be verified by both authors and editors. This work will help to build domain-specific knowledge graphs quickly and accurately, laying the foundation for moving towards cognitive intelligence and helping publishers to move from content providers to knowledge providers.
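
To make the subject–predicate–object idea concrete, the following minimal sketch uses the rdflib library to record a single extracted claim as RDF triples. The example.org namespace and the property names are hypothetical placeholders, not an established publishing ontology; real deployments would reuse existing vocabularies wherever possible.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

# Hypothetical domain namespace for illustration only.
EX = Namespace("https://example.org/kg/")

g = Graph()
paper = EX["paper/10.1234-demo"]   # placeholder article identifier
claim = EX["claim/1"]

g.add((paper, RDF.type, EX.ResearchArticle))
g.add((paper, EX.makesClaim, claim))
g.add((claim, EX.subject, Literal("compound X")))
g.add((claim, EX.predicate, Literal("inhibits")))
g.add((claim, EX.object, Literal("enzyme Y")))

# rdflib >= 6 returns a string from serialize()
print(g.serialize(format="turtle"))
```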

Machines will become writers, readers, and reviewers — at least to some extent. Research articles will need declarative and speculative statements, expressed semantically, to facilitate the large-scale pattern-matching and cross-referencing that lead to new insights. At the same time, we must remember that, in the absence of appropriate guard rails, advanced digital technologies — including AI tools — can be a double-edged sword, as we are already seeing. AI can be misused to create new threats (for example, fake videos, manuscripts, and images). Ethics and governance are critical as we implement AI solutions.

How to get started

  • Take the time to fully understand your existing policies, processes, data, and user behaviors. Even where solutions are immediately available, they can’t necessarily be integrated into your workflow and applied right away! A solid understanding of the here and now will help you answer critical questions: Do you have valid inputs or data to apply these solutions? Do your current practices need to be modified in order to adopt a new solution? If so, can you make those changes, and how long will the changes take? How will your existing business be affected? If the changes mean a temporary loss of productivity or revenue, are you prepared for that?
  • Wherever possible, break down data silos. Make sure your data are cleaned; store them in a consistent machine-readable format in a central database; and collect and store manually recorded data such as different types of disclosures, examples of improper bias, and conflict-of-interest (COI) statements. This approach offers a broader and richer view and creates opportunities for insight into, for example, your entire journal program or a whole discipline.
  • Build a user feedback loop to automatically measure the performance and value of the changes you’re making and to increase data quality so you can build better solutions in the future (a scoring sketch follows this list).
  • Get buy-in, especially from editors and reviewers. Help them to better understand and use these tools/solutions in their daily work, be clear about benefits and limitations to manage their expectations, and explain how they can provide useful feedback to further improve these solutions iteratively.
  • Remember that existing workflows are a starting point, not the finish line. Automated tools and solutions can change workflows dramatically, and the same tools may make more sense at a different point in the workflow or deployed in a different way.
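
As a concrete illustration of the feedback loop described in the list above, an automated check’s flags can be logged alongside the editor’s eventual decision and periodically scored for precision (how often flags were right) and recall (how many real issues were caught). The log format here is hypothetical; adapt it to whatever your workflow actually captures.

```python
from collections import Counter

# Each record pairs an automated flag with the editor's final judgment.
log = [
    {"tool_flagged": True,  "editor_confirmed": True},
    {"tool_flagged": True,  "editor_confirmed": False},
    {"tool_flagged": False, "editor_confirmed": True},
    {"tool_flagged": False, "editor_confirmed": False},
]

counts = Counter((r["tool_flagged"], r["editor_confirmed"]) for r in log)
tp, fp = counts[(True, True)], counts[(True, False)]
fn = counts[(False, True)]

precision = tp / (tp + fp) if tp + fp else 0.0  # how often flags were right
recall = tp / (tp + fn) if tp + fn else 0.0     # how many issues were caught
print(f"precision={precision:.2f} recall={recall:.2f}")
```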

Hong Zhou

Hong Zhou leads the Intelligent Services Group at Wiley Partner Solutions, which designs and develops award-winning products and services that leverage advanced AI, big data, and cloud technologies to modernize publishing workflows, enhance content and audience discovery and monetization, and help publishers move from content providers to knowledge providers.

Sylvia Izzo Hunter

Sylvia Izzo Hunter is Manager, Product Marketing, Community & Content, at Wiley Partner Solutions; previously she was marketing manager at Inera and community manager at Atypon, following a 20-year career in scholarly journal and ebook publishing. She is a member of the SSP Diversity, Equity, Inclusion, and Accessibility Committee. She lives in Toronto.

Discussion

4 Thoughts on "Guest Post — Enabling Trustable, Transparent, and Efficient Submission and Review in an Era of Digital Transformation"

Hong, thank you for your post. You present several concepts for the community to consider. What are your thoughts about XML? In your opinion, where should XML sit in the process? Also, should publishers publish XML?

Finally, what are your thoughts on industry data standards and the sharing of such data?

Thank you.

Darrell

Hi Darrell, in my personal opinion, XML will still play a very important role in publishing for the near future (maybe that changes in the Metaverse :)), since online publishing relies on it (for example, NISO JATS XML). It can be easily read by both humans and machines. Since AI is becoming smarter, it could be not only a writer but also a reader. I think publishers should also publish XML for machines to read, to further improve content dissemination. But XML should be extended to record richer information: not only metadata but also knowledge. For example, the Resource Description Framework (RDF) is used to store knowledge graphs as subject-predicate-object expressions with attribute-level extensions.

Thanks for this excellent and timely post, Hong and Sylvia! With the important caveat that I have an obvious commercial interest here, given my role at Aries Systems, I just wanted to remark that leveraging the “Digital ABCs” and creating an integrated ecosystem of AI-driven tools and services, which are embedded within the submission and review workflow as an integral part of the UX, are core facets of the approach that Aries Systems is adopting with Editorial Manager. To your point about achieving digital transformation goals, namely “reduced costs, improved margins, less wasted time, and earlier detection of research integrity issues”, we are certainly aiming to help publishers in this regard, focusing on use cases including integrity checks, data sharing, peer review workflows, manuscript quality checks and payments processing. Clearly Aries Systems isn’t alone in this focus, which is good news for the scholarly publishing community as a whole in terms of a direction of travel and an emerging availability of choice.

And finally, in a gratuitous piece of self-promotion, I will be giving a lightning talk on this very topic at the upcoming Researcher To Reader conference in London on 21-22 February!

Hi Jason, thank you for your comments. Yes, making submission and review better and more modern is a community effort, especially in defining standards, AI ethics, and responses to research misconduct such as paper mills…
