Editor’s Note: Today’s post is by Mandy Hill. Mandy is the Managing Director of Cambridge University Press.
The need for high-quality research has never been greater. Global investment in research continues to grow as nations seek to accelerate solutions to challenges from climate change to global health crises. In an age of misinformation, academic publishing plays an increasingly vital role in ensuring this knowledge reaches and benefits communities worldwide. Yet that system is now reaching a breaking point. Unless we redesign it, we risk stalling the very progress we seek – with consequences impacting research, education and public trust in academia.
The open research paradox
On the one hand, more research is available to more people than ever before. That should be celebrated. At Cambridge University Press, we’re publishing more than 70% of our research articles open access every month, thanks to the transformative agreements we’ve made with over 2,700 institutions globally. Across the industry, more than 50% of research articles are barrier-free. At Cambridge, we have shown that the transition to open access publishing can be sustainable across the full range of subjects, but we have also seen the increasing friction and waste in the system that make it harder to realize the real opportunities and full ambitions of open research.
We all see the benefits of making research freely available: greater visibility, wider reach, and more impact. Take COVID-19 as a case in point: at the start of the pandemic, researchers and publishers moved quickly to make papers related to what was then known only as novel coronavirus 2019-nCoV freely available. The World Health Organization created a database aggregating the global open research, helping accelerate global collaboration and the public health response. Across our articles and books, in all disciplines, we can track the increased global reach of open access content, benefiting society worldwide.
On the other hand, the volume of published research is now skyrocketing to unsustainable levels. As of 2023, it is estimated that more than 3 million research articles are published each year – triple the output from two decades ago. The sheer volume of submissions, which is growing even faster than publications, increases costs and poses a risk to integrity, with low-quality articles, produced because of distorted publishing incentives, threatening to drown out high-quality work. This will be further exacerbated by AI. Peer review, the bedrock of academic trust, is being stretched to its limits.
Whilst we believe open access is the right model for academic publishing because it demonstrably increases the impact of research, the real question is not how far or how fast the transition to open should be, but what today’s radically different global environment means for the role of publishers and for how our services are valued: how do we maximize the impact of research, rather than how do we maintain the status quo?
There have been many articles and reports, including some in The Scholarly Kitchen, highlighting some of these challenges. It feels like we are now reaching a point where there is mounting desire to turn the identification of problems into tangible solutions.
Publishing more and more in journals is not affordable within library budgets and makes it increasingly unfeasible to maintain the rigor of peer review. Many are now calling for new workflows and technologies to address these issues, but I worry that, whilst these might be part of a solution, they risk just moving the problem instead of solving it.
Our conclusion is that radical change is required including that the total number of journal articles being published needs to be reduced. We need to be publishing less, but better.

Looking beyond the research article
We are very proud of our journals program and will continue to develop it to support research communities and publishing partners. But not every research output needs to be a published journal article. While high-quality journals will always have a vital place in academia and should be nurtured, we urgently need credible, lower-cost, and scalable alternatives that are seen as attractive dissemination pathways by researchers for some research outputs. Diversifying how research is published — beyond the traditional article — reflects the reality of today’s scholarship and can ease the strain on peer review. But real change means tackling the incentives that drive publishing habits, from university reward systems to funders’ expectations.
The future of academic publishing
This year, Cambridge University Press undertook research on these challenges, gathering input from more than 3,000 researchers, publishing partners, funders, librarians and publishers from 120 countries. Our intention was to understand the challenges involved in moving toward a more open future for academic publishing. However, our review uncovered deeper, systemic issues affecting the entire research publishing ecosystem.
Less than a third (32%) of respondents said they believed the current publishing system is in a good position to meet future challenges. More encouragingly, more than half (56%) felt the system could be sustainable with current funding sources, if allocated effectively.
There have been a number of other reports on some of the same issues, including Elsevier’s Researcher of the Future report, in which 68% of respondents said the pressure to publish is greater than it was two to three years ago. These underlying pressures aren’t going away unless we work together to effect change.
The path forward
Earlier this year we published our research findings and recommendations in a report – Publishing futures: Working together to deliver radical change in academic publishing – that we hope offers a first step forward for sector-wide collaboration and action. In the report, we identify a set of interconnected actions for addressing the complex systemic issues within the publishing ecosystem:
- Adapt academic reward and recognition systems to value quality over quantity, and to support diverse research outputs.
- Make equity a guiding principle for open access by enhancing global coordination, institutional support, and awareness to address systemic inequities, particularly for authors in low- and middle-income countries.
- Support credible, scalable alternative publishing platforms such as preprint servers, community-owned platforms, and diamond open access models, in addition to traditional books and journals, with increasing publisher transparency on costs and demonstration of the value of the services provided.
- Recognize peer review as a core academic contribution with appropriate reward structures, providing training and support for reviewers, and embracing responsible technological solutions to address peer review at scale.
There are no easy answers or quick solutions to each of these points. Many of the elements for change have been out there for some time, e.g., through the work of DORA and the importance of arXiv in some scientific communities, but we have not reached the tipping point of seismic change that is required. But if we act together — convening, listening, and experimenting — we can create an ecosystem that is open by default, equitable by design, and fit for the future. The choices we make now will shape the research ecosystem for generations to come.
We need to build a system that truly serves the global pursuit of knowledge. It’s time to stop tinkering at the edges and waiting for someone else to sort out the problems to ensure publishing remains a valued service to academia.
Discussion
16 Thoughts on "Guest Post: Academic Publishing Is Not Fit for the Future – If We Don’t Act Now, The Vital Role Research Plays in Society Is at Risk"
These are very powerful insights! The current academic and publishing models are at a breaking point, and unless both undergo transformative change soon, they risk losing even more credibility in a world that already views science as less trustworthy than before – a dangerous prospect.
I completely agree that the total number of submissions needs to go down. For me (and perhaps I’m biased as a researcher), the most urgent issue is fixing academic reward and recognition systems. Academia is oversaturated: PhD graduation rates keep rising while the number of faculty positions is stagnant or shrinking. In the UK, for example, recent immigration policies have drastically reduced the number of overseas undergraduates, pushing universities into financial crisis. The consequences are mass redundancies, department closures, and cancelled courses, not to mention less money for libraries to negotiate with publishers. The few PhD graduates who eventually reach the “holy grail” of professorship do so after years of immense pressure, because publishing – a lot, and ideally with high impact – remains the main criterion for promotion. This bloats submission numbers, overwhelms publishers, and damages researchers’ work–life balance. In such a cutthroat world, some turn to subterfuges to remain relevant, such as AI-generated text and figures, citation cartels, and paper mills, contributing to what is effectively a research integrity crisis, and placing a huge strain on editorial teams.
So, how do we change “publish or perish”? I don’t have the full answer, but I suspect an important factor is funding. Universities prioritise publication metrics because these drive rankings and therefore income. Funders could help shift the system by reducing the weight placed on publication quantity and valuing other outputs more: patents, industry collaborations, community engagement, teaching, student supervision, and, importantly, peer review work!
But I wonder, are the publishers ready for a world with fewer submissions? While their editorial staff seems to be running on fumes, at the same time, the business models rely deeply on submission revenue. So, how to balance that? That’s something I honestly don’t know!
Victoria, thank you for your great comment and insights from the researcher perspective. I think the challenge is that no single person or group has all of the answers, and we have to really collaborate to find them and make them work.
I also think that all stakeholders will have to accept some potentially uncomfortable change – we have to remember what we are here for and look at how best to achieve that in today’s context. But that doesn’t mean we throw away everything, because there is a lot that is great in academic publishing.
Funding!!! The question society must address is: is higher education a public good or an individual responsibility? If the latter, nothing will change; if the former, everything changes.
These issues have been visible and problematic for DECADES. A weakness lies at the very start of a submission: peer review, a thankless system for the reviewers and often treated as such, and one that is leaky by its very nature, consuming time and treasure. As the editorial notes, even the publishers admit that article flow is critical for their cash flow. Who will “bell the cat” when there is no clear cost or set of criteria across the spectrum from author to published article?
One possibility is open and signed peer review at the gateway, where costs can be visible and amortized across the spectrum. There are many paths, some of which exist in publications other than academic journals, where public comments weigh heavily across a spectrum of funders, authors, content, and related institutional parties. Cui bono?
tom abeles, cofounding editor, retired, On the Horizon
While I 100% agree with the concept that we would be better off with fewer, better research papers, I struggle to understand how to translate that into real world implementation. What exactly would it mean? We have more people doing more science now than ever before, more funding for that science, and a much more equitable and inclusive research community that has grown globally rather than being limited to a small number of wealthy countries. This is a good thing, but with that comes greater quantities of research outputs, and to be useful, those need to be made public (and ideally vetted in some manner).
Who gets to decide what is “paper-worthy”, which work is important enough to be published as a research article, and which work is to be given second-class treatment as some sort of lesser output (an unreviewed preprint or a dataset)? Does this create new inequities? Should only work from well-known labs from important institutions be good enough to put out as a paper? What happens if I run a big lab and have to decide which of my students gets to put out a paper this year — will it be based on the quality of their science or some other factor like favoritism or making my funder happy or where that student is in their career progression (early career versus about to hit the job market)? And sorry to everyone else in the lab, this isn’t your year, your career is on hold.
Does shifting the same materials out of journals really reduce the workload and information overload? Presumably that science will still be done, and we want it read and taken forward. If instead of a million extra papers, we now have a million extra preprints and an accompanying million extra datasets, is that more sustainable and more efficient than running those same things through the filtering mechanism of journals? If I’m a scientist and I have to vet ten poor quality preprints for every one that is good and impacts my work, is that a good use of my very precious time?
Is the solution then just to have fewer people doing less science?
I think one important question that doesn’t get asked anywhere nearly enough is ‘what do we mean by good papers?’ For decades, publishers have favoured papers that make big, potentially impactful claims rather than focusing on the rigour of the research itself.
The assumption seems to have been that researchers are responsible to their employers for the quality of the work, and that it’s the institutions’ responsibility to keep the researcher honest. Meanwhile, institutions and funders have relied on bibliometrics to decide who is doing the best work, essentially outsourcing quality control to publishers. The result has been that nobody has been checking whether the research is actually rigorous, reproducible and thorough.
Maybe what we need is a paradigm shift. Instead of going back to selecting only the most impactful and novel research like we used to do, let’s take this opportunity to recalibrate and select only the most rigorous and well-documented work.
Hi David.
Clearly, as a publisher, I want to do everything I can to support great research. This isn’t about trying to limit that. But the world is not binary – the choices are not either everything stays the same or we have to discard both the baby and the bath water.
There are, in my mind, three categories content might fall into. Firstly, as we all know, there is a rise in totally fraudulent content. We need to work with academic communities to remove the incentives for that content to be created. This is not a quick fix, as many have spoken about it for years, but it is absolutely fundamental.
Then at the other extreme is the wealth of great content that benefits from being published in journals that provide real value to their communities and authors want to keep on doing what they do now.
But there is a third category, where researchers are creating valid research but are only publishing it in a journal because they have to in order to meet promotion or funding criteria. They might be rejected by several journals before they end up publishing in a journal (wasting author and reviewer time, and therefore cost in the system) that really isn’t seen as adding any value by their research community other than the stamp of being published in a traditional way. Or the work might just not fit the model of traditional journals – research is so much more diverse now, and our modes of dissemination should reflect that. Those authors should have credible choices that get recognised and don’t disadvantage them.
Change is needed. I don’t pretend to have all the answers; I just know that assuming the status quo is the only option will get us nowhere.
I largely agree, but I struggle with the question of how one decides which work is in the second category versus the third without going through some sort of neutral review process. We have lots of low-end journals that are good places to put out incremental work — and I firmly believe it is important to have that work out there and available. It could all be done in a different format; I just don’t know if that makes a significant difference toward solving the problems you present here.
“Adapting academic reward and recognition” – is absolutely on point.
Regrettably, there is almost zero appetite to make this happen:
1) Publishers greatly profit from the existing system and therefore have almost zero economic motivation to change. Who can blame them?
2) Research funders are complacently satisfied with existing measures of “impact” since it gives them a low-effort veneer of efficiency (“the research we fund appears in high impact journals” etc.).
3) Today’s senior researchers secured their positions of power based on the existing reward system. It’s unrealistic to ask them to question and change the system that lifted them up.
Only some sort of shock (such as the collapse of the volunteer culture) might cause these dynamics to change.
The shock that you seek emerges when authors choose to publish in non-traditional venues where relevance is rewarded differently, rather than collecting “stamps” (remember green stamps?) from academic journals.
The system, as I said above, is leaky, and like a river, when it rises over the bank; it creates the channels anew. It will wash out the detritus and the weakened levees, and those who try to pile sand bags.
There are growing numbers of public and private sources that seek such materials and release them to rise above the intellectual smog, while the traditional seekers grind along the worn and deepening rut.
> “Adapting academic reward and recognition” – is absolutely on point.
> Regrettably, there is almost zero appetite to make this happen:
> 1) Publishers greatly profit from the existing system and therefore have almost zero economic motivation to change. Who can blame them?
^ Agree with you here Richard, though on that last point I think we’re now seeing threats emerge which might change those motivations: The integrity crisis, the AI-driven volume crisis.
I think part of the problem in the knowledge ecosystem is that those who set the incentives often aren’t those who feel the greatest impact of the results of those incentives.
Hi Melinda. Such a huge subject for considerable reflection and discussion. There is a newish publish-review-curate model at eLife Sciences (https://elifesciences.org/about/) from the past couple of years that seems (at least on the surface) to be working well. As a non-profit, I’m unsure how they manage financials, but the preprint, community-review model seems to work, though I’m only looking from the outside in and have no experience of any side.
I absolutely, however, do not want open research to be such that untrained “community” (aka, the public) take hold of egregiously rotten “academic” slop, published openly, using that to create a whole debacle of meme-driven drivel, skewing public knowledge and creating worse outcomes for everyone. The rabid anti-vax movement is just one example. The current brouhaha around protein intake, another!
If fewer systematic reviews are required to prevent that… bring it on! I also get that we don’t want researchers to become stymied by the requirement to be unequivocally right up front. We know that science is a conversation. The scholarly publication flow is how we have, to date, managed this conversation, the original objective being to consistently improve our knowledge so that we can improve our world. Greed got in the way somewhere along the line. Now much of scholarly publishing is making researchers push overwhelming APC charges up a steep hill called “Professionalism in Academia.” That sucks. Not a fan.
Can we remove greed from academic process from project to publish? Maybe not. How we – as a global society – fund trustworthy and ethical science communication is both the problem and the solution. Argh.
The Lonesome Death of Scholarly Publications
Human needs force scholars and their organizations to live in two worlds. One characterized by Emil Faber’s “Knowledge is Good,” the other by Gordon Gekko’s “Greed is Good.” Finding the right balance is the near insuperable and thankless task left to university administrators who must, through diplomacy and fleetness of thought, navigate the academy through the treacherous time in which we now find ourselves. Nonetheless, organizations created by scholars can set their own agenda: if no one comes to their events then they close shop — a noble experiment tried and failed. When such learned societies, however, decide to follow Gekko, there is no turning back. Profit becomes the key performance indicator (KPI) upon which the society’s staff is rewarded and Goodhart’s law prevails. What could go wrong? How can a learned society create chaos or wreak havoc on the world?
Not all, but many such organizations publish scholarly journals. Initially unfettered by the demands of financial gains, these society publications could choose to embody high standards of scholarship which not only include correctness, accuracy, and reproducibility but also advance Faber’s knowledge. The notion of an advance is subjective, but the peer review process can be designed to ensure that truth and facts prevail. When Gekko’s financial perspective leads society publishers to inhabit the mindset of commercial publishers, those societies lose face. We well know that groups attach themselves to slogans (“Restoring America’s Promise”, “Democracy Dies in Darkness”, “Science is Our Only Shareholder”, …) yet take actions that betray their so-called values. When society journals become predatory in terms of pricing, “selectivity”, or programs that “accelerate” the review process, they inflict wounds and scars on the dignity of the tens of thousands of scholars who, in the past, have contributed their time and wisdom to producing, reviewing, and editing the content that brought esteem to those publications. Now is the time for your tears.
Mandy’s final comment: “Our conclusion is that radical change is required including that the total number of journal articles being published needs to be reduced. We need to be publishing less, but better.”
——–
Several comments have pointed out that the gateway for an article IS “peer review”. Here the gatekeepers currently lack the modern armamentarium used in current research in the same or different emergent fields. The fact that reviewers are stretched thin and need more assets to do their work points to THE major fault in how they are vetted and supported in order to address Mandy’s concern, and it raises the question of whether there is a gaping hole in how open access is funded and vetted as well.
Mandy,
The >3M articles figure is entirely accurate, but the link in the post covers only S&E articles. I suspect the total output is around 5 or 6M, which is even more staggering.
I think that points one and four are almost diametrically opposed, and that this opposition will have to be resolved before any real progress can be made.
As it stands, the academic rewards system is based almost entirely on quality, not quantity, but “quality” is defined by the single metric of acceptance for publication after peer-review in an archival quality journal. The universities don’t care how many papers you post online in a preprint server; they care about how many papers you publish in a peer-reviewed journal. That need to publish in such a journal is what drives the overloading of the peer-review process, resulting in a situation where even the best of papers struggle to find willing reviewers and far too many mediocre papers are published due to a lack of real expert peer-review.
I don’t know that there is a solution to that problem outside of the research community and their funders accepting the idea that research not published in an archival journal still holds real value, but I don’t think it’s the journals that need to change their practices.