Editor’s note: To close out 2025, we asked the Chefs: What would you ask for from Academic Publishing Santa? Whether you believe or not, what are your holiday wishes for our community?


Roohi Ghosh

Dear Santa,

If effort counts, many researchers would have earned a place on your nice list this year! They have done what was asked of them: worked hard, adapted quickly, learned new tools (there have been so many!), stayed open to AI while navigating the uncertainty around it, dealt with rejection, avoided the predatory journal trap, and tried their best to make their work visible.

Still, this year made one thing clear: the publishing landscape has become more complex, more demanding, and far less forgiving than it once was. Perhaps some of this is transient and driven by the pace of change; yet the pressure to publish has intensified, now compounded by the pressure to keep up! This pace of change affects not only researchers but editors and reviewers as well, all operating within a system under strain.

My wish is for us to recalibrate and course correct. The uncomfortable truth is that academia has evolved into a space where, unlike in any other industry, the end client, i.e., the author, holds so little leverage. Those generating knowledge, innovating, and reviewing one another’s work have limited visibility into decision-making processes that carry significant career risk.

Take desk rejection, for example. A manuscript is declined with a brief note about “fit.” From the researcher’s side, that leaves much unsaid. Was it the scope? Timing? Policy? Framing? The paper is reworked, reformatted, and resubmitted, often without completely understanding what went wrong.

Another example is the growing confusion around responsible AI use. Policies differ not only between publishers but also between journals within the same portfolio. Author guidelines regarding AI use are often vague or inconsistent, and when something goes wrong, the reputational risk sits squarely with the researcher, even though the rules themselves were unclear to begin with.

What stands out to me is that in most industries, users help shape the systems they rely on. Feedback matters. Transparency is expected. In scholarly publishing, authors are central to the ecosystem, yet rarely treated as stakeholders in how workflows and policies are designed.

So, here’s my wish list on behalf of researchers:

  • My first wish is simple. I wish for a system that assumes good intent and prioritizes guidance over punishment. Workflows built around guidance instead of suspicion reduce risk rather than increase it. They improve compliance, reduce avoidable errors, and even lighten the editorial load.
  • My next wish is for infrastructure that is holistic and unified, working across the entire research lifecycle. Right now, the journey feels disjointed. Could there be a simple research dashboard that tracks a researcher’s preferred journals, suggests collaborators for their next project, surfaces guidelines for metadata and GEO (generative engine optimization) to make their work discoverable once published, and offers ready-to-use templates alongside one space for collaboration and support? I wish for a platform that enables dialogue and collaboration: one that goes beyond being a tool to become a workplace built for researchers’ success, where they can feel more connected with publishers.

Roy Kaufman

Leaving aside the many reasons Santa will be skipping my house, if I were to ask for something, it would not be as difficult to achieve as “peace on Earth” — but would be harder to deliver than a pony. I would ask for more complete, more accurate, more granular, more up-to-date, more consistent, and more comprehensive metadata in scholarly workflows. By scholarly workflows, I mean everything from pre-submission to post-publication. And as metadata may change during this process, I would further request that it be current at all times.

Do you get why no one wants to invite me to their parties?

Randy Townsend

My wish for an Academic Publishing Santa is to resolve the peer review crisis. While many important conversations addressing the impacts of shifting government policies, technological advancements, and AI integrations justifiably attract our attention, the heart of academic publishing remains rooted in the integrity of our core activities and is protected by our loyal communities of peer reviewers. Preserving that integrity is paramount. On a fundamental level, these communities have always served the broader public by filtering out bad research and offering valuable feedback to help authors improve the quality and impact of their manuscripts. The critical need for their contributions has been significantly amplified by the targeted professional assault on prominent researchers and the erasure (and strategic reimagining) of validated and widely accepted research results that stand in direct opposition to specific ideological narratives and questionable motives. 

Peer reviewers are our guardians of research truths, and now they’re being asked to hold a line on what’s become a dangerously slippery slope. In an ironic turn of events, social media has emerged as the leading source of news in the United States at a time when these same platforms have dissolved their fact-checking policies. The saturation of sophisticated AI deep fakes and salacious misinformation campaigns increasingly contaminates these platforms and compromises our ability to distinguish fact from fantasy.

I want Academic Publishing Santa to deliver expanded and diversified pools of qualified peer reviewers along with engagement programs designed to appropriately recognize and support dedicated reviewers while training and mentoring new voices. If Santa can deliver on this, I believe we’ll be better positioned to fortify the integrity of legitimate research by reducing our vulnerability to the threats of paper mills, nefarious peer reviewer rings, misinformation, fabrication and falsification.

Tim Vines

I’m going to ask Academic Publishing Santa to give the gift of journal submission fees. APCs are eye-wateringly expensive because the authors of publishable articles have to cover the cost of triaging, reviewing, and rejecting (1/acceptance rate) − 1 lower-quality articles (at a 20% acceptance rate, that’s four rejections per acceptance). The coming avalanche of AI slop means that acceptance rates will fall while the costs of triaging and filtering the garbage go up, so APCs will have to rise even higher in response. Announcing APC price rises when everyone else is questioning the fundamental value proposition of journals will put you on the naughty list for next Christmas.

There is another way: have authors pay for the costs of triaging and reviewing their own article by charging a submission fee. Yes, this will deter authors with articles that have only a small chance of being accepted. This is a good thing. The avalanche of AI slop will go elsewhere, and good riddance. If an article gets accepted, then charge the authors a publication fee. You can set submission and publication fees to match your APC revenue while keeping the fee for published authors reasonable (under $2000), no matter your acceptance rate.

If mandating submission fees is too big a step, then you can just offer authors the option of paying either an APC at acceptance or a fee at initial submission plus a publication fee at acceptance — allowing this choice is still revenue neutral.
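The revenue-neutral arithmetic here is straightforward: the APC model earns APC × acceptance rate per submission, while the fee model earns submission fee + publication fee × acceptance rate, and setting the two equal fixes the submission fee. A minimal sketch, using illustrative numbers (the $3,000 APC and 25% acceptance rate are hypothetical, not figures from this post):

```python
def revenue_neutral_submission_fee(apc, acceptance_rate, publication_fee):
    """Submission fee that keeps per-submission revenue equal to the APC model.

    APC model revenue per submission:  apc * acceptance_rate
    Fee model revenue per submission:  submission_fee + publication_fee * acceptance_rate
    Setting these equal and solving gives the submission fee below.
    """
    return acceptance_rate * (apc - publication_fee)

# Hypothetical journal: $3000 APC, 25% acceptance rate, $2000 publication fee
fee = revenue_neutral_submission_fee(3000, 0.25, 2000)
print(fee)  # 250.0 -- a $250 submission fee keeps revenue at $750 per submission
```

Note that the lower the acceptance rate, the more of the burden shifts from the published authors' fee to the submission fee, which is exactly the incentive effect described above.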

By moving to submission fees, publishers can — at a stroke — steer away bad actors and move from the deeply unpopular APC model to a more equitable Open Access pricing system, all the while sustaining or increasing journal revenues. Thanks, Academic Publishing Santa!

Hong Zhou

Hong’s Academic Publishing Santa Wish List: AI, Integrity, and What Comes Next

Wish #1: A Credible, Community-Trusted AI Tools Leaderboard
The scholarly AI ecosystem has become a holiday bazaar: dozens of AI translation tools, 40-plus summarization tools, more than 10 AI-driven research discovery tools, over 50 AI-text detectors, a growing constellation of image-forensics offerings, and an ever-expanding list of “trust solutions,” all promising certainty while delivering… nuance. What we need is not yet another tool, but clarity. My wish is for a transparent, community-governed leaderboard built on shared criteria, gold-standard datasets, and clear use-case guidance, much like the LLM benchmarks the AI world has come to rely on. Editors don’t just need to know what scores highest; they need to know what works best, when, and why.

Wish #2: Less Prompt Alchemy, More Scientific Foundations
Integrity products are rapidly converging, and not always for the better. Too many offerings differentiate on UI, workflow, or clever prompt engineering, while relying on the same underlying signals. My wish for startups and their investors is a renewed focus on fundamentals: distinguishing AI-generated from AI-polished content, deep reproducibility analysis, statistical anomaly detection, reference and citation analysis, and robust author and institution disambiguation. These are hard problems, but they are the ones that deliver durable value. Build deeper, more reliable detection engines, and let platforms, publishers, and aggregators integrate them with confidence.

Wish #3: A First-Class, Transparent Home for AI-Assisted and AI-Generated Research
AI-assisted and increasingly AI-generated research is not a future problem; it is already here, and pretending otherwise only drives it underground. At the same time, as AI systems become more capable and more “AI scientists” emerge, AI-generated research will only increase and, in many cases, deliver real scholarly value, so rejecting or pushing it aside is neither realistic nor productive. In the early stages of this wave, it may be prudent to keep such work distinct from conventional research streams, at least until the research community broadly accepts it. My wish is for our community, preprint servers and publishers in particular, to create a clearly labelled, dedicated space for this work and actively encourage transparent submission of AI-generated research, with explicit requirements around methods, prompts, models, and datasets, enhanced integrity checks, and structured community review, while ensuring it remains fully indexed, discoverable, and citable. In short, we should curate AI-generated research thoughtfully rather than forcing it to masquerade as something it is not.

If Academic Publishing Santa delivers even one of these, I promise to be very good next year.

Scholarly Kitchen


The Scholarly Kitchen account is used for anonymous posts, housekeeping posts at the blog, posts from the Society for Scholarly Publishing, and a few other purposes.

Roohi Ghosh


Roohi Ghosh is the ambassador for researcher success at Cactus Communications (CACTUS). She is passionate about advocating for researchers and amplifying their voices on a global stage.

Roy Kaufman


Roy Kaufman is Managing Director of both Business Development and Government Relations for the Copyright Clearance Center (CCC). Prior to CCC, Kaufman served as Legal Director, John Wiley and Sons, Inc. He is a member of, among other things, the Bar of the State of New York, the Author’s Guild, and the editorial board of UKSG Insights. Kaufman also advises the US Government on international trade matters through membership in International Trade Advisory Committee (ITAC) 13 – Intellectual Property and the Library of Congress’s Copyright Public Modernization Committee in addition to serving on the Board of the United States Intellectual Property Alliance (USIPA).

Randy Townsend


Randy Townsend is a passionate advocate for scholarly publishing, with nearly 20 years of professional experience. At the American Geophysical Union, he led and contributed to initiatives focused on open data, research integrity, peer review, editor engagement, and publishing policy. A committed champion of Diversity, Equity, Inclusion, and Accessibility (DEIA), Randy has co-chaired DEIA committees for nonprofit organizations including the Society for Scholarly Publishing (SSP). He has served on the advisory board of the Association Media & Publishing Network’s Association Council, as a member of the SSP Board of Directors, and chaired the Council of Science Editors’ Webinar Subcommittee. During his term as SSP President, Randy launched a mental health awareness campaign, reflecting his dedication to supporting the well-being of the publishing community. As the founding Editor-in-Chief of the award-winning GW Journal of Ethics in Publishing, Randy is deeply committed to research integrity and to mentoring future leaders devoted to ethical publishing practices. He also serves as an Associate Professor in George Washington University’s top-ranked Master of Professional Studies in Publishing Program, where he continues to inspire and shape the field’s future. After a brief tenure at PLOS, Randy now consults with Origin Editorial, where he leads peer review engagement strategy. Outside of work, he enjoys gardening and grilling — often while still talking shop.

Tim Vines


Tim Vines is the Founder and Project Lead on DataSeer, an AI-based tool that helps authors, journals and other stakeholders with sharing research data. He's also a consultant with Origin Editorial, where he advises journals and publishers on peer review. Prior to that he founded Axios Review, an independent peer review company that helped authors find journals that wanted their paper. He was the Managing Editor for the journal Molecular Ecology for eight years, where he led their adoption of data sharing and numerous other initiatives. He has also published research papers on peer review, data sharing, and reproducibility (including one that was covered by Vanity Fair). He has a PhD in evolutionary ecology from the University of Edinburgh and now lives in Vancouver, Canada.

Hong Zhou


Dr. Hong Zhou is VP of Product Management at KnowledgeWorks Global Ltd., where he guides product vision and strategy, leads cross-functional teams, and drives innovation across publishing solutions for researchers, librarians, and publishers worldwide. Previously, he was Senior Director of AI Product & Innovation at Wiley, defining AI strategy and leading the roadmap. He helped shape Wiley’s AI ethics principles, advanced the Wiley Research Exchange and Atypon platforms, and led development of Wiley’s first AI-driven papermill detection tool, which won the 2025 Silver SSP EPIC Award for Excellence in Research Integrity Tools. He is a recognized industry leader in AI, product innovation, and workflow transformation. He also received an individual honorable mention for the 2024 APE Award for Innovation. He holds a PhD in 3D Modelling with AI and an MBA in Digital Transformation (Oxford University). He also serves as a COPE Advisor, Scholarly Kitchen Chef, Co-Chair of ALPSP’s AI Special Interest Group, and Distinguished Expert at China’s National Key Laboratory of Knowledge Mining for Medical Journals.
