Editor’s Note: Today’s post is by Scholarly Kitchen Chef Hong Zhou and Marie Soulière. Marie is Head of Editorial Ethics and Quality Assurance at Frontiers, as well as a COPE Council Member.

On July 1, the Committee on Publication Ethics (COPE) hosted a Forum discussion on Emerging AI Dilemmas in Scholarly Publishing, exploring the main challenges facing the scholarly community. Top of mind, of course, is how rapidly evolving AI technologies are transforming the way research is conducted, reported, reviewed, and published, raising opportunities and also complex ethical and practical issues for all parties involved. Following the discussion, during which only a fraction of the points raised were discussed — and with COPE’s agreement   — we used AI (and our human brains!) to map the rest of the questions and comments from the attendees into four main themes:

  1. Responsible and ethical use of AI
  2. Transparency and disclosure
  3. Detection and editorial standards
  4. Impact on peer Review, equity, and inclusion

 Person holding icons for AI ethics, regulations, law, fairness, transparency, data protection, privacy, and ethical practices.

Responsible and Ethical Use of AI

What constitutes fair, responsible, and ethical use of AI across various roles (authors, reviewers, editors, institutions, publishers)? Researchers and authors wonder where limits should be drawn on AI use in writing, reviewing, generating images, translating, and handling data. The STM Association has provided guidance with their 2023 guide to Generative AI in Scholarly Communications, and recent additional draft classification of AI use in manuscript preparation, which is open for community feedback. Publishers are following suit: for example, Wiley updated the AI use section of Wiley General Guidance and Best Practice for journal authors in March 2025, and has recently conducted extensive author interviews as the basis for more detailed guidance for research authors to be released later this year, having released guidance for learning authors (trade, higher ed) already in March 2025.

Users should remember that AI is a tool and not an accountable author or reviewer. Since 2023, there has been more of a focus on ensuring the ethical use of AI by authors. Now the discussion is shifting to editors and reviewers. There is a rise in the use of AI for peer review, from integration of AI tools in many publishing systems for screening manuscripts for scope, quality, and integrity, to assisting editors in finding relevant reviewers, as well as AI support for reviewers to conduct their review. Yet most researchers remain unaware of the ethical and legal concerns over the ‘simple’ act of copy/pasting or uploading unpublished content into external AI platforms. They may not have been clearly informed that this may constitute a breach of intellectual property, confidentiality, privacy, or other rights. The content they share could very well be ingested (and possibly reused) by the tools. The provided review reports can also include crude inaccuracies, hallucinated references, or biases due to the training material and lack of critical thinking by the AI tool used. In addition, authors aware of the use of AI tools to generate reviews — despite this being discouraged by  many publishersfor the reasons listed above — have recently started prompt-injection tricks to bias AI into generating positive review comments for manuscripts.

Beyond the creation of best practices and guidelines, there is a clear need for journals and publishers to offer more training and materials to support editors and reviewers in the responsible use of AI for authors, assisting review and decision-making (e.g., Frontiers’ editor webinar on the ethical use of AI, COPE’s AI decision-making guide.)

Further advocacy at the level of research institutions, in collaboration with libraries and research integrity offices, is likely a key aspect for the responsible adoption and use of AI. Taking this governance upstream is more likely to yield long-term results for the responsible use of AI, as it also relies on transparency and disclosure, allowing all stakeholders to understand when and how AI tools are used in the research and publication process.

Transparency and Disclosure

The principle of transparency was raised during the COPE Forum, best summarized by Viktor Wang of the California State University pointing out that: “AI transparency is not about shame — it’s about intellectual honesty.”

Sharing specific examples and use cases of the appropriate (or inappropriate) use of generative AI, as well as clear disclosure templates — or direct integration of AI disclosure queries in submission systems — is key to supporting authors with transparency and best practices.

Similarly, for editors, reviewers, and publishers, the use of AI tools should be disclosed openly. You used AI to polish your review? Disclose it. You’re a publisher using AI to suggest potential reviewers? Make that clear.

The use of AI tools in publishing and beyond is here to stay and does present valuable opportunities. By embracing disclosure and transparency, we embrace the reality of widespread AI use, foster trust, and strengthen our global understanding of fair and responsible use.

The COPE discussion veered towards editor uncertainty on how to handle suspected but undisclosed AI use by authors. While there is no uniform, formal guidance on this to date, our recommendation is to approach this in a similar manner to other types of potential integrity concerns where proof is lacking (e.g., suspected undisclosed conflicts of interest, or potential papermill characteristics such as nonsensical or standardized phrases, or authors from unrelated or geographically distant institutions):
1. Query but don’t accuse:
Contact the authors in a neutral tone noting that the features of the manuscript are consistent with generative AI assistance, and that, per journal policy, they should clarify whether any such tools were used in the preparation of the manuscript. Ensure authors update the disclosure statement.

  1. Encourage transparency:
    Make clear that the aim is transparency. Avoid rejecting individual manuscripts based solely on the suspicion of undisclosed AI.
  2. Proceed proportionally to evidence:
    If the author(s) cannot satisfy the concerns regarding the use of the AI tool, then the editor reserves the right to reject the manuscript. Journals might also consider escalating their concerns to an institution or ethical committee if evidence of research misconduct is identified.

Yet, what is all this guidance good for if no one enforces best practices? This is where the next theme comes into play…

Detection and Editorial Standards

Which tools can editors and publishers employ to uphold standards? Are there strategies available to preserve research quality in the face of increased use of AI in the research and manuscript drafting process?

Collectively, there’s an acknowledgement that maintaining integrity is becoming more challenging, and that no single tool can address this. Guidance for editors and publishers on how to address breaches and maintain editorial standards remains limited. Accusations of the use — or rather the misuse — of AI or of falsified data can have serious consequences for researchers’ careers and backfire on editorial programs.

As highlighted above, AI use for writing or assisting review isn’t the problem; the focus should be on ensuring consistent standards, accountability and transparency, and identifying issues with mis-citations, invalid data, erroneous conclusions, data security, and the gibberish content that AI sometimes generates.

Current AI-text detection solutions fall into three main categories:

  1. Machine learning classifiers that learn patterns such as sentence, structure and grammar in AI vs. human writing from examples. These updates may struggle with new models and are opaque “black boxes”.
  2. Word-signature lists that flag telltale phrases such as, “As my knowledge cutoff…” Explainable and simple yet easily defeated by light editing.
  3. “Humanness” metrics that check if text is too “predictable” or “robotic”. Can miss AI that’s well edited and are fading as new AI models write more like humans.

Most current detectors are reactive: they need training data from new LLMs, struggle to separate “AI-assisted” from “AI-authored,” and show fragile accuracy — especially after translation or editing — leading to false positives and missed hits.

More than 50 AI-detection products now crowd the market: commercial detectors like Turnitin, Grammarly, Copyleaks, Winston AI, and Originality.ai; publisher-built tools such as Wiley’s AI-generated content detector in Research Exchange Screening and Springer Nature’s “Geppetto”; and free/open-source options such as Detect-GPT, Binoculars, QuillBot, Hugging Face detectors, and Scribbr. Beyond this reactive detection layer, initiatives such as Project Origin and Google’s SynthID embed provenance watermarks directly into generated content to trace ownership and deter misuse. Those schemes promise accurate results but face practical hurdles — watermarks can be tampered with or altered, and they’ll only work if they’re widely adopted by both commercial and open-source model providers.

The accuracy of AI content detectors varies wildly by discipline, training data, and how heavily a text has been edited, so false positives are common. If you use them, treat them as hints, not verdicts — a flag to trigger deeper checks alongside other clues — like fake references, suspect emails, manipulated images, or gibberish content. Detection tools can help identify suspect manuscripts, but they’re not foolproof and must be used in tandem with broader integrity checks.

Looking ahead, we believe the focus will move from “catching AI-generated text” to checking whether authors follow AI-use disclosure rules and editorial standards. Detection will become a routine quality-control step — no longer treated as a special integrity check. But what will still matter — and matter even more — is solid human oversight: making sure the data, citations, research methodology, and critical analysis are sound, regardless of how polished the text appears.

Impact on Peer Review, Equity, and Inclusion

AI has been legitimately applied for years in screening for various supporting tasks, including flagging similarity of content, out-of-scope content, irrelevant or missing references, language problems, potential conflicts of interest, statistical or image irregularities, and paper mill clues. It also suggests qualified reviewers, generates brief summaries that spotlight key claims or gaps, and provides templates and literature checks to help reviewers write clearer, more thorough reports — speeding up screening while sharpening feedback and reproducibility checks, and allowing expert reviewers to focus their attention on the research itself.

This adoption of AI-assisted but human-led publishing presents both opportunities and challenges for global equity in publishing.

AI tools can streamline and democratize many aspects of the publishing process, such as editing, translation, alt-text generation, and speech recognition. These tools can help authors from underrepresented backgrounds and regions overcome language and accessibility barriers and improve manuscript quality. However, many worry they will be penalized by detection tools flagging their manuscripts as AI-generated — emphasizing the need for clear recognition that such content is not problematic per se, and complementary issue-detection methods are necessary to ensure fairness.

Unequal access to advanced AI tools risks deepening existing inequalities — not all researchers, journals, or publishers can afford them. Moreover, biases embedded in AI systems can reinforce current inequities if not carefully managed. Developers must be mindful of inclusion and diversity, ensuring that their technologies accommodate a wide range of linguistic, cultural, and regional contexts.

AI can democratize access, raise standards, and help empower researchers worldwide. At the same time, it poses emergent threats to research integrity, trust, and the human-centered values at the heart of science.

Conclusion

The dilemmas surrounding AI in scholarly publishing are both profound and, at times, paradoxical. Our community’s duty is to strike the right equilibrium — one that fosters innovation while defending the principles that define good scholarship and transparent publishing.

As we step into the agentic AI world — where humans lead a group of digital team members or where AI agents collaborate with other agents with greater autonomy, reasoning, and initiative — the success of AI adoption in scholarly publishing mainly depends on three key elements:

Trust – AI must be transparent, explainable, and consistent to earn the confidence of authors, reviewers, editors, and publishers.

Collaboration – We need to rethink collaboration as human-AI interaction. The best outcomes emerge when humans remain at the center of decision-making, guiding and validating AI as a true thought partner.

Governance – Strong policies, oversight, and boundaries are vital to ensure AI systems do what they’re designed to do, and nothing beyond, and that known biases are addressed.

We are shaping the future of knowledge, and it’s important to make sure AI works with us, not instead of us. As we collectively imagine the future of ethical, equitable science in an AI-driven world, there are no quick or static answers. Policies will continue to adapt and so too must our shared commitment to integrity, transparency, inclusivity, and vigilance. Let’s keep the conversation going — share your policies, case studies, or questions below or on COPE’s Emerging AI Dilemmas in Scholarly Publishing Topic Discussion page.

Disclaimer 1: A proprietary AI tool assisted in summarizing Forum queries and suggesting rephrasings, with all facts verified by the authors. A (human) copyeditor also helped polish the final text.

Disclaimer 2: Marie is employed by Frontiers, and an elected Council member for the Committee on Publication Ethics (COPE). Hong is employed by Wiley, and a COPE advisor. All views expressed in this publication are the authors’ and do not reflect the official stance of Frontiers, Wiley, or COPE.

Hong Zhou

Hong Zhou

Dr. Hong Zhou is VP of Product Management at KnowledgeWorks Global Ltd., where he guides product vision and strategy, leads cross-functional teams, and drives innovation across publishing solutions for researchers, librarians, and publishers worldwide. Previously, he was Senior Director of AI Product & Innovation at Wiley, defining AI strategy and leading the roadmap. He helped shape Wiley’s AI ethics principles, advanced the Wiley Research Exchange and Atypon platforms, and led development of Wiley’s first AI-driven papermill detection tool, which won the 2025 Silver SSP EPIC Award for Excellence in Research Integrity Tools. He is a recognized industry leader in AI, product innovation, and workflow transformation. He also received an individual honorable mention for the 2024 APE Award for Innovation. He holds a PhD in 3D Modelling with AI and an MBA in Digital Transformation (Oxford University). He also serves as a COPE Advisor, Scholarly Kitchen Chef, Co-Chair of ALPSP’s AI Special Interest Group, and Distinguished Expert at China’s National Key Laboratory of Knowledge Mining for Medical Journals.

Marie Soulière

Dr. Marie Soulière leads strategic initiatives in open-access publishing, focusing on publication ethics, research integrity, and high-quality peer review. She effectively balances maintaining high standards with achieving operational efficiency through the integration of quality assurance proceedures and artificial intelligence tools. Notably, she played a pivotal role in the development of Frontiers’ AI Review Assistant (AIRA).

Discussion

2 Thoughts on "From Detection to Disclosure — Key Takeaways on AI Ethics from COPE’s Forum"

Very timely piece. One development that might complement the issues raised here is the recently introduced GAIDeT (Generative AI Delegation Taxonomy) https://doi.org/10.1080/08989621.2025.2544331 Instead of generic statements like “AI was used,” it offers a structured way to describe what exactly was delegated to AI and under whose oversight. This can make disclosure more meaningful, help reduce reliance on unreliable detection tools, and lower the risk of stigmatizing legitimate uses such as language support. In that sense, GAIDeT connects well with COPE’s call for transparency and could be a step toward the kind of trustworthy, evidence-based practice others are also advocating.

Comments are closed.