Editor’s note: To close out 2025, we asked the Chefs: What would you ask for from Academic Publishing Santa? Whether you believe or not, what are your holiday wishes for our community?

Roohi Ghosh
Dear Santa,
If effort counts, many researchers would have earned a place on your nice list this year! They have done what was asked of them: worked hard, adapted quickly, learned new tools (there have been so many!), stayed open to AI while navigating the uncertainty around it, dealt with rejection, avoided the predatory journal trap, and tried their best to make their work visible.
Still, this year made one thing clear: the publishing landscape has become more complex, more demanding, and far less forgiving than it once was. Perhaps some of this is transient and driven by the pace of change; yet the pressure to publish has intensified, now compounded by the pressure to keep up! This pace of change affects not only researchers but editors and reviewers as well, all operating within a system under strain.
My wish is for us to recalibrate and course correct. The uncomfortable truth is that academia has evolved into a space where, unlike in any other industry, the end client, i.e., the author, holds so little leverage. Those generating knowledge, innovating, and reviewing one another’s work have limited visibility into decision-making processes that carry significant career risk.
Take desk rejection, for example. A manuscript is declined with a brief note about “fit.” From the researcher’s side, that leaves much unsaid. Was it the scope? Timing? Policy? Framing? The paper is reworked, reformatted, and resubmitted, often without completely understanding what went wrong.
Another example is the growing confusion around responsible AI use. Policies differ not only between publishers but also between journals within the same portfolio. Author guidelines regarding AI use are often vague or inconsistent, and when something goes wrong, the reputational risk sits squarely with the researcher, even though the rules themselves were unclear to begin with.
What stands out to me is that in most industries, users help shape the systems they rely on. Feedback matters. Transparency is expected. In scholarly publishing, authors are central to the ecosystem, yet rarely treated as stakeholders in how workflows and policies are designed.
So, here’s my wish list on behalf of researchers:
- My first wish is simple. I wish for a system that assumes good intent and prioritizes guidance over punishment. Workflows built around guidance instead of suspicion reduce risk rather than increase it. They improve compliance, reduce avoidable errors, and even lighten the editorial load.
- My next wish is for infrastructure that is more holistic and unified, working across the entire research lifecycle. Right now, the journey feels disjointed. Could there be a simple research dashboard that tracks a researcher’s preferred journals, suggests collaborators for their next project, provides guidelines for metadata and GEO (generative engine optimization) so their work is discoverable once published, offers ready-to-use templates, and serves as one space for collaboration and support? I wish for a platform that allows dialogue and collaboration, one that goes beyond being a tool to become a workplace built for researchers’ success, where they can feel more connected with publishers.
Roy Kaufman
Leaving aside the many reasons Santa will be skipping my house, if I were to ask for something, it would not be as difficult to achieve as “peace on Earth” — but would be harder to deliver than a pony. I would ask for more complete, more accurate, more granular, more up-to-date, more consistent, and more comprehensive metadata in scholarly workflows. By scholarly workflows, I mean everything from pre-submission to post-publication. And as metadata may change during this process, I would further request that it be current at all times.
Do you get why no one wants to invite me to their parties?
Randy Townsend
My wish for an Academic Publishing Santa is to resolve the peer review crisis. While many important conversations addressing the impacts of shifting government policies, technological advancements, and AI integrations justifiably attract our attention, the heart of academic publishing remains rooted in the integrity of our core activities and is protected by our loyal communities of peer reviewers. Preserving that integrity is paramount. On a fundamental level, these communities have always served the broader public by filtering out bad research and offering valuable feedback to help authors improve the quality and impact of their manuscripts. The critical need for their contributions has been significantly amplified by the targeted professional assault on prominent researchers and the erasure (and strategic reimagining) of validated and widely accepted research results that stand in direct opposition to specific ideological narratives and questionable motives.
Peer reviewers are our guardians of research truths, and now they’re being asked to hold a line on what’s become a dangerously slippery slope. In an ironic turn of events, social media has emerged as the leading source of news in the United States at a time when these same platforms have dissolved their fact-checking policies. The saturation of sophisticated AI deep fakes and salacious misinformation campaigns increasingly contaminates these platforms and compromises our ability to distinguish fact from fantasy.
I want Academic Publishing Santa to deliver expanded and diversified pools of qualified peer reviewers along with engagement programs designed to appropriately recognize and support dedicated reviewers while training and mentoring new voices. If Santa can deliver on this, I believe we’ll be better positioned to fortify the integrity of legitimate research by reducing our vulnerability to the threats of paper mills, nefarious peer reviewer rings, misinformation, fabrication and falsification.
Tim Vines
I’m going to ask Academic Publishing Santa to give the gift of journal submission fees. APCs are eye-wateringly expensive because the authors of publishable articles have to cover the cost of triaging, reviewing, and rejecting (1/acceptance rate) − 1 lower-quality articles for each article that is published. The coming avalanche of AI slop means that acceptance rates will fall while the costs of triaging and filtering the garbage will go up, so APCs will have to rise even higher in response. Announcing APC price rises when everyone else is questioning the fundamental value proposition of journals will put you on the naughty list for next Christmas.
There is another way: have authors pay for the costs of triaging and reviewing their own article by charging a submission fee. Yes, this will deter authors with articles that have only a small chance of being accepted. This is a good thing. The avalanche of AI slop will go elsewhere, and good riddance. If an article gets accepted, then charge the authors a publication fee. You can set submission and publication fees to match your APC revenue while keeping the fee for published authors reasonable (under $2000), no matter your acceptance rate.
If mandating submission fees is too big a step, then you can just offer authors the option of paying either an APC at acceptance or a fee at initial submission plus a publication fee at acceptance — allowing this choice is still revenue neutral.
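The revenue-neutral arithmetic behind this can be sketched in a few lines. Since each accepted article corresponds to 1/acceptance-rate submissions, a journal can hold per-accepted-article revenue equal to its current APC by setting the publication fee to APC minus (submission fee / acceptance rate). A minimal sketch in Python, using illustrative figures that are not from the post:

```python
def revenue_neutral_fees(apc, acceptance_rate, submission_fee):
    """Return the publication fee that keeps revenue equal to the APC model.

    Each accepted article corresponds to 1/acceptance_rate submissions, so
    per-accepted-article revenue under the split model is
    submission_fee / acceptance_rate + publication_fee. Setting that equal
    to the APC and solving gives the publication fee.
    """
    return apc - submission_fee / acceptance_rate

# Illustrative numbers only: a journal with a $3000 APC and a 25%
# acceptance rate that charges a $500 submission fee can charge
# accepted authors $1000 and keep total revenue unchanged.
print(revenue_neutral_fees(3000, 0.25, 500))  # → 1000.0
```

Note how a lower acceptance rate shifts more of the cost onto the (larger) pool of submitters, which is exactly why the publication fee can stay modest regardless of how much slop arrives at the front door.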
By moving to submission fees, publishers can — at a stroke — steer away bad actors and move from the deeply unpopular APC model to a more equitable Open Access pricing system, all the while sustaining or increasing journal revenues. Thanks, Academic Publishing Santa!
Hong Zhou
Hong’s Academic Publishing Santa Wish List: AI, Integrity, and What Comes Next
Wish #1: A Credible, Community-Trusted AI Tools Leaderboard
The scholarly AI ecosystem has become a holiday bazaar: dozens of AI translation tools, 40-plus summarization tools, more than 10 AI-driven research discovery tools, over 50 AI-text detectors, a growing constellation of image-forensics offerings, and an ever-expanding list of “trust solutions,” all promising certainty while delivering… nuance. What we need is not yet another tool, but clarity. My wish is for a transparent, community-governed leaderboard built on shared criteria, gold-standard datasets, and clear use-case guidance, much like the LLM benchmarks the AI world has come to rely on. Editors don’t just need to know what scores highest; they need to know what works best, when, and why.
Wish #2: Less Prompt Alchemy, More Scientific Foundations
Integrity products are rapidly converging, and not always for the better. Too many offerings differentiate on UI, workflow, or clever prompt engineering, while relying on the same underlying signals. My wish for startups and their investors is a renewed focus on fundamentals: differentiation between AI-generated versus AI-polished content, deep reproducibility analysis, statistical anomaly detection, reference and citation analysis, robust author and institution disambiguation. These are hard problems, but they are the ones that deliver durable value. Build deeper, more reliable detection engines, and let platforms, publishers, and aggregators integrate them with confidence.
Wish #3: A First-Class, Transparent Home for AI-Assisted and AI-Generated Research
AI-assisted and increasingly AI-generated research is not a future problem; it is already here, and pretending otherwise only drives it underground. At the same time, as AI systems become more capable and more “AI scientists” emerge, AI-generated research will only increase and, in many cases, deliver real scholarly value, so rejecting or pushing it aside is neither realistic nor productive. In the early stages of this wave, it may be prudent to keep such work distinct from conventional research streams, at least until the research community broadly accepts it. My wish is for our community, preprint servers and publishers in particular, to create a clearly labelled, dedicated space for this work and to actively encourage transparent submission of AI-generated research: explicit requirements around methods, prompts, models, and datasets; enhanced integrity checks; and structured community review, while ensuring the work remains fully indexed, discoverable, and citable. In short, we should curate AI-generated research thoughtfully rather than forcing it to masquerade as something it is not.
If Academic Publishing Santa delivers even one of these, I promise to be very good next year.