When Work Organisation, Labour and Globalisation announced it was being wound down in 2025, the decision stunned many in the open science community. In an editorial letter published that April, the journal’s founding editor wrote, “Our 20th year will be our last … we cannot see any way that the journal can survive sustainably on the basis of gift labour.” The journal, which had recently embraced a diamond open access model with no fees for authors or readers, found that removing subscription income left no stable revenue to sustain more than 2,000 hours of unpaid editorial work each year. The experience served as a stark warning: the scholarly community risks embracing a “utopian ideal” of free publishing without grappling with the structural realities that make it viable.

This tension, between ideal and infrastructure, between the dream of equity and the reality of uneven capacity, lies at the heart of the diamond open access (OA) conversation today. As the world celebrates Open Access Week later this month, it is worth asking: is “free” truly fair, and for whom?

[Illustration: a shattered, multicolored diamond on a black background]

The Diamond Promise: Equity Without Paywalls

Diamond OA is often described as the most equitable form of scholarly communication. Unlike gold OA, which typically charges authors article processing charges (APCs), diamond OA removes financial barriers on both sides: readers pay nothing to access content, and authors pay nothing to publish it. Ownership often rests with universities, scholarly societies, or library consortia, rather than commercial publishers, and journals are frequently community-driven and mission-focused.

It is also more common than many assume. According to the influential OA Diamond Journals Study (cOAlition S / Science Europe, 2021), there are between 17,000 and 29,000 diamond journals worldwide — a substantial share of the global scholarly publishing ecosystem. Most operate on lean budgets: more than half spend under €10,000 per year, and roughly a quarter report running at a loss. They are, in many ways, the grassroots expression of open science ideals: volunteer-run, non-profit, and designed to maximize access and participation.

In Latin America, this model has flourished. As much as 90% of open access journals in the region operate without APCs, reflecting decades-long investments in community publishing infrastructure. But even in regions where diamond OA dominates, sustainability remains a central concern, and the picture elsewhere is far more uneven.

Unequal Realities: A Tale of Two Worlds

The adoption and impact of diamond OA diverge sharply across geographies. In Africa, a recent EIFL survey of 149 diamond OA journals revealed that 75% cited financial constraints as their biggest challenge, and nearly half reported a shortage of human resources. Without stable funding, many journals struggle to implement editorial best practices, maintain production standards, or invest in technologies like XML workflows and long-term archiving.

These challenges are not confined to one continent. A 2023 study of the German diamond OA landscape found that 23 journals had ceased operations, underscoring that financial instability is not a Global South problem alone. The difference is that in resource-constrained settings, the margin for error is much thinner, and the consequences of failure, more severe.

South Asia offers another revealing case. In Pakistan, roughly 90% of scholarly journals operate under some form of open access, and 69% of those are diamond OA. Yet many of these journals lack impact factors, indexing coverage, or technical infrastructure, factors that limit their visibility and credibility on the global stage. A 2025 survey of Pakistani library professionals found widespread support for the principle of diamond OA, but also deep concern about its long-term sustainability and discoverability. These findings mirror trends across the Global South, where resource constraints, limited digital infrastructure, and dependence on volunteer labor pose significant obstacles to scaling diamond models.

The result is a paradox: in regions where the promise of diamond OA is most urgently needed to break down paywalls and amplify underrepresented scholarship, the conditions for sustaining it are least available.

Visibility and Infrastructure: Access Alone Is Not Enough

Diamond OA’s equity promise falters if published work remains invisible. A 2024 analysis by Simard et al. found that diamond journals are significantly underrepresented in major indexing services like Scopus and Web of Science compared to their gold OA counterparts. Many of these journals operate at national or regional scales, publish in local languages, or lack resources to meet technical standards — all factors that reduce their visibility in global research ecosystems.

Compliance with emerging policy frameworks is another hurdle. Only 4.3% of diamond journals meet all Plan S technical criteria, and about 75% still deliver content only in PDF format, without machine-readable XML or HTML versions. These gaps make it harder for journals to participate in interoperable scholarly infrastructures, and harder for their content to be discovered, cited, or reused.
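
To make concrete what “machine-readable” means here, the fragment below is a schematic sketch of structured article front matter in the style of JATS, a widely used XML standard for journal articles. It is illustrative only: the journal title, DOI, and author are placeholders, and the element set is heavily simplified.

    <article article-type="research-article" dtd-version="1.3">
      <front>
        <journal-meta>
          <journal-title-group>
            <journal-title>Example Journal of Diamond OA Studies</journal-title>
          </journal-title-group>
          <issn pub-type="epub">0000-0000</issn>
        </journal-meta>
        <article-meta>
          <article-id pub-id-type="doi">10.0000/example.2025.001</article-id>
          <title-group>
            <article-title>A Placeholder Article Title</article-title>
          </title-group>
          <contrib-group>
            <contrib contrib-type="author">
              <name><surname>Lovelace</surname><given-names>Ada</given-names></name>
            </contrib>
          </contrib-group>
          <pub-date pub-type="epub"><year>2025</year></pub-date>
        </article-meta>
      </front>
    </article>

Because metadata like this is structured rather than locked inside a PDF, indexing and preservation services can harvest and interpret it automatically, which is precisely the capability many diamond journals currently lack.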

When “Free” Isn’t Fair: The Hidden Costs of Diamond OA

The “no-fee” model often conceals significant costs. Editorial labor, copyediting, typesetting, hosting, preservation — none of these are free. When no revenue is collected from subscriptions or APCs, someone must absorb the expense. Often, that “someone” is a small editorial team working unpaid, a university department stretching its limited budget, or a scholarly society relying on volunteer time.

The closure of Work Organisation, Labour and Globalisation illustrates this point starkly. Its diamond OA pivot aligned perfectly with open science principles, but without sustainable funding, the model collapsed under the weight of uncompensated work.

By contrast, some well-resourced systems are experimenting with structured funding. In 2024, the Dutch research council NWO launched grants of up to €50,000 to support journals flipping to diamond OA, recognizing that transitions are costly and require financial planning. Similar initiatives, such as NSF-funded programs at MIT, provide crucial short-term support for the transition phase. However, most of these grants last only a few years, leaving journals facing significant uncertainty once the initial funding ends. This underscores a critical gap between facilitating a transition and ensuring long-term sustainability.

Building a Fairer Future: From Celebration to Collaboration

If diamond OA is to fulfil its equity promise, it cannot rely solely on idealism. The Toluca–Cape Town Declaration (2024), emerging from the 2nd Global Summit on Diamond Open Access, calls for a coordinated global effort to strengthen the model, including sustainable funding mechanisms, shared infrastructure, common quality standards, and collaborative governance structures.

Several policy recommendations emerge from recent research:

  • Shared Infrastructure: Pooled publishing platforms and shared technical services can reduce costs and raise quality across many small journals.
  • Institutional Funding: Universities and governments should integrate diamond OA support into core research funding, rather than relying on short-term grants.
  • Indexing Inclusion: Repositories, indexing services, and bibliometric systems must adapt to better represent regionally focused and non-English diamond journals.
  • Capacity Building: Training in digital publishing, metadata standards, and editorial best practices can help journals meet global technical criteria.

These steps require global collaboration, not just financial investment from the Global North, but also agency and leadership from the Global South in shaping future models. The goal is not to abandon the diamond ideal, but to ensure that it rests on a stable and equitable foundation.

A Conversation in Motion

The story of diamond open access is still being written. Its rise reflects a powerful shift toward scholarly communication as a public good, one not gated by commercial profit or author wealth. But the stories from Africa, Germany, Pakistan, and beyond remind us that ideals alone cannot sustain journals. “Free” is never free. Someone pays the price, in money, in time, in visibility, and too often, that someone is an under-resourced editor or institution in the Global South.

As Open Access Week approaches, perhaps the most important question is not how many journals are diamond, but how many can endure, and what we, as a global community, are willing to do to ensure they do. Access is only the first step; equity and sustainability must follow.

This post is part of a broader conversation I’ll be exploring throughout Open Access Week, on how we can move beyond paywalls and processing charges to build a more inclusive, resilient future for scholarly publishing. I invite you to join that conversation, and to rethink what “open” truly means.

Maryam Sayab

Maryam Sayab is the Director of Communications at the Asian Council of Science Editors (ACSE) and Co-Chair of Peer Review Week. She also serves on the Editorial Committee of Katina, contributing to its Open Access Knowledge section. With a background rooted in research integrity and publication ethics, she actively works to advance regional conversations around responsible peer review, transparent editorial practices, and inclusive open science. Maryam is dedicated to building bridges between global publishing standards and the practical realities faced by researchers and editors, particularly across Asia and the Arab world. She also supports initiatives that strengthen community-driven collaboration, ethical scholarship, and the sustainable development of research ecosystems.

Discussion

52 Thoughts on "Diamond Dreams, Unequal Realities: The Promise and Pitfalls of No-APC Open Access"

So-called diamond OA (I think under BOAI, gold was not originally intended to mean APC) faces many challenges, which this post gives us a taste of.

Academics can be very sceptical of diamond OA because they do not understand any of the scholarly infrastructure issues behind it. For example, preservation is raised here as something diamond OA often lacks, but it is actually not that expensive or difficult on its own. However, academics will not know what you’re talking about if you tell them your diamond journal is safely preserved with PKP PN and CLOCKSS.

This keeps diamond locked in a bind – academics don’t see the value in these journals, so they don’t publish in, read, or cite them. The scholarly infrastructure is therefore not built to support or resource them, and diamond journals are not equipped to keep up with evolving standards, which keeps them locked out. Without significant political will from academia (something academia generally lacks), I don’t see this changing.

Thanks, that’s such a sharp observation. The gap between infrastructure and academic perception really does keep diamond OA stuck in a loop. Visibility and trust are as critical as funding here, and hearing this from others in the community really resonates.

If funding agencies treat publishing as an integral part of research, they can provide a stable, ongoing form of financing, just as they provide stable streams of funding to support other research activities. The rise of the ORE platform in Europe reflects this philosophy; so does Redalyc in Mexico and, more generally, across Latin America. And collaboration across countries and continents can help matters further. The European Diamond Capacity Hub is one example of this attitude.
The beauty of the diamond model of scholarly publishing is that it forces all of us to think about its financing outside the limits of market mechanisms, and solutions are multiplying.

One of the problems we face here in Germany is that our major funding organization, the DFG (German Research Foundation), refuses to fund ongoing infrastructure costs — this is considered the responsibility of universities and other academic institutions. This means that providing a stable stream of funding is something they will not consider at the moment. Even the money they are providing to fund SeDOA (https://diamond-open-access.de/en/sedoa/) is limited, and the expectation (as far as I understand it, at least) is that they will eventually have to secure all their funding independently of the DFG.

By contrast, paying an APC for a single publication connected to a DFG-funded research project fits neatly into their funding model — no ongoing infrastructure costs, just a one-time expense clearly linked to a discrete project. I won’t generalize, but my suspicion is that the DFG may not be alone in this approach to funding. At least in Germany, the funding of ongoing infrastructure costs will require either individual institutions to step up or a sea change in how the DFG approaches its funding.

Josh, I think you make a key point, or as Kurt Vonnegut once said, “Another flaw in the human character is that everybody wants to build and nobody wants to do maintenance.”

Funders are often very good about paying to build things, but once built, they lose interest and move on to the next shiny thing. There was a great study about this a few years ago that I wrote about here in the Kitchen:
https://scholarlykitchen.sspnet.org/2019/08/01/building-for-the-long-term-why-business-strategies-are-needed-for-community-owned-infrastructure/

Also, there’s often this misguided notion out there that the big commercial publishers are somehow “tricking” researchers into publishing in and reading their journals, but in reality, they’re actually really good at their job, which is providing a product that the market desires. One of the reasons for this is that they are willing to invest the capital necessary to create and maintain the products and services that the market demands. Look at all the recent policies (particularly in the US) that require the use of persistent identifiers (PIDs). I’ve seen a few funders looking to help build those PIDs, but I’ve yet to see one commit to supporting any of these systems over the long term.

Meanwhile Elsevier alone employs 2500 technologists and spends more than $1.6 billion per year on technology (https://www.ce-strategy.com/the-brief/diamond/#1). And Elsevier represents only 14% of the total market! Compare that to ORE’s proposed Diamond OA budget of €9 million per year and you get a sense of why public infrastructure is not quite there.

Besides the obvious reverence for a “market”, David Crotty gives us an interesting example of the comparative costs of private development versus public projects. ORE cannot do what the Elsevier platform does, at least not yet, but it would not cost 1.6 billion dollars either. The most efficient form of spending is not always market-driven, and scholarly publishing is a good example of this situation. The Elsevier example is a wonderful display of waste, which you readily recognize if you are not besotted by Friedman and Hayek.

Call it a “market”, call it a “community”, either way it’s a group of people who have needs to be met, and those needs can be met through a variety of approaches. Perhaps Elsevier is “wasteful”, but whatever they’re doing, it seems to be highly attractive to the community. Open Research Europe has published a total of 1,742 articles in the last five years. That’s just over 0.2% of the number Elsevier published in 2024 alone (and remember, Elsevier published only 14% of the total literature that year). So clearly Elsevier is doing something that is attractive to authors that ORE is not (or perhaps ORE simply lacks the investment or capacity to meet the needs of the community).

But the greater point still stands. Authors, universities, funders, governments, and the general public all want a lot out of a scholarly communication system. So far, funders seem unwilling to pay for those things, at least beyond their initial build (and that’s only in a very small number of circumstances). See also what’s happening in the US right now; perhaps it’s worth thinking about how much we want governments paying directly for, and hence directly controlling, the communication of information.

Thank you, Josh, for these interesting comments. What you describe is a situation that I have heard about elsewhere, including here in Canada. For the funding agencies, the problem should be the following: an APC is a price that covers several kinds of costs, some of which are not the funding agencies’ responsibility. An APC obviously covers the costs – I mean the real, effective costs here – of publishing plus the costs of prestige-seeking on the part of the researchers. Publishing costs, in my opinion, and this is a thesis that funders themselves have adopted for a very long time now, should be covered. Prestige-seeking costs are not the responsibility of funding agencies. They are the responsibility of researchers and their institutions if the latter are obsessed with rankings. Funding agencies should be concerned with the quality of the work (which should not be evaluated through the citation-chasing characteristics of WoS-indexed journals) and with the alignment of the research results with their research programme, not with the prestige of the researchers.
How can funding agencies evaluate the real costs of publishing? This is a good question, but figures do exist for some (generally small) publishers. Funding agencies can start with that information. Many publishers will object, of course, but then the request is simple: open your books and show us what the cost of publishing is for you. Forensic accounting techniques will do the rest.
Funding agencies are clearly re-evaluating their role in the research ecosystem, and they have come to understand that they have money and clout. Together, they have ample resources to react to big multinational publishers. And they are not – I repeat, “not” – ranked! This is most important. Furthermore, the charities among them are multinational too. This is where the real importance of cOAlition S lies, not in Plan S, which was ill-conceived anyway (despite good intentions, but hell is paved with such…).

Thank you again for your remarks.

To David Crotty: Markets and communities are not the same things. Confusing the two is exactly the problem.

About governments paying directly for publishing: they already pay directly for research, and publishing is part of research. Paying for something does not necessarily mean “controlling” it, even though that is a possibility if the governance is not well designed. The US government was paying for PBS and NPR, but it still felt that they did not respond correctly to its desires (or attempts to control).

I was thinking more of the shuttering of all the CDC- and NIH-run journals, as well as the Morbidity and Mortality Weekly Report, and all the climate data that has disappeared. Is PubMed Central next? I think that the certification and dissemination of research results should be handled by neutral third parties, rather than political entities (and by that I also include many non-governmental research funders) who have an inherent conflict of interest. Which is not to say that commercial entities are the only route to achieving this. I’ve long held that the research community should own and control its own means of information dissemination (and at least before becoming a consultant have only ever worked for non-profit publishers that are parts of research institutions). The problem is that most of the research community is not really good at a lot of the important aspects of publishing, and the governance of such organizations puts them at a disadvantage that has led to the dominance of commercial players (see Joe Esposito’s classic 2011 piece: https://scholarlykitchen.sspnet.org/2011/10/24/governance-and-the-not-for-profit-publisher/).

But I’d still rather have Elsevier and Wiley doing what they do than Elon Musk or RFK Jr. dictating what information is made available.

Elsevier or Elon Musk? What a choice! Either entity or person can do about as much harm as the other. When I review the lobbying efforts of Elsevier between 1999 and now, I have little to rejoice about.
As for Esposito’s “classic” piece, his concluding remarks show another mesmerized stance with regard to markets. NFPs (as he calls them) compete with for-profits, “like it or not”. The point here is: what are the rules of competition? The answer is simple: in the 1980s, the rules of competition were reorganized and structured by the impact factor. Get rid of the IF, and the competition changes altogether, as Springer warned in 2018 in their prospectus for a failed IPO. Markets are not transcendental objects emerging on the 8th day of creation; they are organized rules of competition.

Let us now look at “neutral parties”: the profit-seeking motive is certainly orthogonal to the quest for validated knowledge, but the divergence of objectives does not equate with neutrality and certainly does not guarantee it. When Nature decided to publish a weird paper on the memory of water, it was because Nature was looking for maximum visibility, not maximum quality. Some people I have heard or read mention the existence of a firewall between the intellectual and the financial sides of a scholarly journal. Saying as much is equivalent to admitting the existence of a potential problem. Not clarifying the nature of the putative firewall and how it works creates further difficulties.

Public oversight of anything is not a sure guarantee of perfect behaviour, but reasonably democratic governments are subject to periodic elections. The leaders of Elsevier are also accountable to their stockholders, but the latter seek profits, not intellectual quality. The link between commercial profits and the quality of published research results is certainly not guaranteed by the impact factor.

All of which is why I’d rather see research institutions and the community, via their research societies, in charge of things, as both seem to have more longevity and stability than our increasingly fragile political organizations; democratic governments subject to periodic elections are increasingly an endangered species.

None of which solves the problem of perpetual underinvestment in infrastructure and maintenance, however.

I really appreciate how both of you are surfacing two sides of the same issue. David’s point about chronic underinvestment in maintenance and Jean-Claude’s emphasis on public responsibility actually intersect more than they diverge.

Yes, markets can scale fast, but they rarely prioritize what isn’t directly profitable, and that’s where stable public or community-backed investment becomes essential. On the other hand, relying on public systems alone without clear, long-term governance also leaves things fragile.

Diamond OA lives right in the middle of that tension. It exposes how much of scholarly communication depends on shared infrastructure that everyone uses but no one quite wants to fund. That’s the gap we need to close, not just technically, but politically and culturally, too.

Maryam Sayab’s article offers a very clear and honest look at the growing tension between the ideals of Diamond Open Access and the difficult realities of sustaining it. I fully agree with her observations on infrastructure gaps, funding challenges, and issues of visibility.

Today, the global scientific publishing landscape is largely shaped by a handful of powerful commercial publishers such as Elsevier, Springer Nature, Wiley, and Taylor & Francis. These companies not only act as intermediaries but effectively steer how research outputs are shared and valued. In practice, researchers are often required to pay substantial APCs to make their publicly funded work visible. While such costs can be absorbed in high-income countries, they are simply not feasible for many researchers in low- and middle-income settings. This imbalance deepens existing inequalities in global knowledge production and dissemination.

Diamond Open Access offers a meaningful alternative to this system, but its fragility cannot be overlooked. The lack of stable funding, limited technical capacity, indexing barriers, and reliance on volunteer labor make these journals vulnerable. This is not merely about publishing models—it is fundamentally about who holds power over the circulation of knowledge.

Several steps could help address these challenges:

  • Sustainable funding mechanisms: Long-term institutional or public funding is needed to move beyond short-term grants.
  • Shared technical infrastructure: Regional platforms could reduce costs and support smaller journals.
  • Better visibility and indexing: Specific initiatives are required to ensure diamond journals are represented in major databases.
  • Capacity building: Training and support can help meet technical and editorial standards.

For countries like Türkiye, this is not just an access issue—it’s a matter of science policy. Strengthening national journals, protecting their diamond OA identity, and ensuring greater visibility are essential steps toward a more balanced and equitable publishing landscape.

Thank you, Taner, I really appreciate this thoughtful reflection. You’ve captured the power imbalance and the stakes for countries like Türkiye so clearly. I couldn’t agree more on the need for sustainable, locally anchored solutions.

Kudos for these remarks. I will add one point: the WoS and Scopus indices are instruments designed not only to index, but also to exclude. This form of power has to be confronted, especially in the countries suffering from these exclusionary tactics.
One important development is the latest iteration of ORE at the European Commission. This platform is going to be based on OJS, and its requirements are going to become part and parcel of the next version of OJS. This means that ORE is aligning itself with the tens of thousands of journals running on OJS. Now, with such a base, designing a new indexing structure should become easier. Then, various criteria responding to various notions of quality (or relevance to specific contexts) can be developed.

I help to run a diamond open access journal, and we recently published a paper about our experience at https://arxiv.org/abs/2504.10424. The key to success is to absolutely minimize the amount of human labor involved. If you allow authors to submit Microsoft Word but then build your workflow to require XML, you have automatically boxed yourself into a situation where you need too much human labor to accomplish the tasks. In this case it’s because the underlying technology is not up to the task, and nobody has created tools that are acceptable to both authors and editors. The standard argument for this lack of good tools is that the market is too small to justify the economic cost of their development, but that ignores the reality that a lot of the information technology world already runs on open source tools. A study by researchers at Harvard Business School found that open source software has produced about 8.8 trillion dollars of value in this economy, but the publishing world has not been the beneficiary yet. 95% of that value has been produced by only 3,000 software developers.

In our world, authors write in LaTeX, an open source document preparation system that is 40 years old and still going strong. This gave us a significant advantage because metadata handling can be automated, and the output from an author-produced document is easy to transform into a well-typeset PDF. Our pipeline also automatically produces the XML for Crossref. The demand for HTML is to satisfy two goals, namely reading on small screens and accessibility for the sight-impaired. The latter can be addressed by accessible PDF, which the LaTeX team is working hard on. Reading on small screens is somewhat less important in mathematics, but the same accessibility effort is going to result in sufficiently well rendered HTML. arXiv is already processing 20,000 articles a month with almost no human intervention, and it is making great strides with its automated LaTeX-to-HTML pipeline (see https://scholarlykitchen.sspnet.org/2024/01/23/stephanie-orphan-arxiv/).
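
To give a concrete flavour of what this kind of automation looks like, here is a toy sketch in Python. It is purely illustrative, not the system described in our paper: it pulls the title and author commands out of a LaTeX source and emits a simplified, Crossref-style XML fragment. The element set is pared down relative to the real Crossref deposit schema (which requires batch headers, dates, and more), and the DOI and URL are placeholders.

    # Toy sketch: extract basic metadata from LaTeX source and emit a
    # minimal, Crossref-style journal_article fragment. Simplified and
    # illustrative only; a real deposit must follow the full Crossref schema.
    import re
    import xml.etree.ElementTree as ET

    def extract_metadata(tex_source):
        """Pull the title and author commands out of a LaTeX preamble."""
        title = re.search(r"\\title\{(.+?)\}", tex_source, re.S)
        authors = re.findall(r"\\author\{(.+?)\}", tex_source, re.S)
        return {
            "title": title.group(1).strip() if title else "",
            "authors": [a.strip() for a in authors],
        }

    def to_deposit_fragment(meta, doi, landing_url):
        """Build a minimal journal_article element from the extracted metadata."""
        article = ET.Element("journal_article")
        titles = ET.SubElement(article, "titles")
        ET.SubElement(titles, "title").text = meta["title"]
        contributors = ET.SubElement(article, "contributors")
        for full_name in meta["authors"]:
            parts = full_name.split()
            person = ET.SubElement(contributors, "person_name",
                                   contributor_role="author")
            ET.SubElement(person, "given_name").text = " ".join(parts[:-1])
            ET.SubElement(person, "surname").text = parts[-1]
        doi_data = ET.SubElement(article, "doi_data")
        ET.SubElement(doi_data, "doi").text = doi
        ET.SubElement(doi_data, "resource").text = landing_url
        return ET.tostring(article, encoding="unicode")

    sample = r"\title{A Minimal Example} \author{Ada Lovelace} \author{Alan Turing}"
    meta = extract_metadata(sample)
    print(to_deposit_fragment(meta, "10.0000/example.2025.001",
                              "https://example-journal.org/articles/001"))

The point is not this particular script but the pattern: once authors supply a structured source like LaTeX, the metadata and deposit files fall out automatically, and no human needs to retype anything.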

All of this was enabled through the use of LaTeX, but of course that may not be the natural choice for non-STEM fields. Authors and publishers should ask themselves why they are stuck using something as archaic as Microsoft Word rather than a more modern replacement that is both open source and designed for publishing. Funding agencies and non-profits should be asking themselves how they can fund the development of better open source publishing tools that are sustainable. Publishing does not need to be so labor-intensive, and that will result in a much more sustainable future. Of course a lot of people who are readers of this blog will object because they are the people being paid to do this work. We used to pay Linotype operators, but that job went away. A lot of the jobs in publishing will probably also go away eventually. Like it or not, that is the path to sustainable diamond open access publishing.

This really highlights how much sustainability depends on reducing the hidden labor in publishing. The point about tooling is especially important; without better open-source solutions, many diamond OA journals will remain trapped in labor-heavy workflows.

I cannot imagine the sustainability of Diamond OA without fairly covering the operational costs of the publication process. This will require full collaboration from everyone involved in the process, from submission to publication. The authors should take care of the quality, and institutions should share some funds based on the volume of submissions from their researchers. All communities related to the process should share some funds based on their usefulness. Industries that collect ideas from the published research should share some funds, too. A fair distribution of funds is essential to make the sustainability of Diamond OA possible, as a first step.

Exactly. Without fair cost-sharing across the ecosystem, the promise of Diamond OA will remain fragile. Collaboration has to match the ambition.

Neither type of publishing can be considered the better option. Each has its role and suitable purpose. It is therefore important to find a balance: one that fits researchers’ needs and encourages collaboration to improve publishing. Despite the challenges, the goal remains to increase researchers’ access to knowledge. The aim is to support the publication of research in a way that balances protecting rights with ease of access. In open access, authors’ rights remain protected, while content becomes easier to use.

I completely agree, it’s really not about one model being “better” than the other. The real value lies in finding that middle ground where access, rights, and sustainability can coexist meaningfully.

It is actually very difficult to understand why science funders would not be prepared to continuously fund successful DOA journals. We (i.e., a working group of the German National Academy Leopoldina) have recently published a discussion paper where we take this point head on: journals need to be funded like a scientific infrastructure, analogous to libraries or databases; see https://www.leopoldina.org/en/publications/detailview/publication/a-new-concept-for-the-direct-funding-and-evaluation-of-scientific-journals-2025/
Of course, this means that journals also need to be scientifically evaluated to justify the funding. Evaluation systems are applied to institutions, infrastructures, consortia, individual scientists, and grant applications, nearly all of which rely heavily on publication output in scientific journals. Yet, paradoxically, there is no formal mechanism for evaluating the journals themselves. If such a mechanism were established, funders could easily switch to sustainable funding of journals.

Thank you very much for this point. This is exactly how to proceed. Publishing is part of the research cycle and no one worries about the continuity of support for research. If funding agencies also include the COST of publishing (not the price, which is highly inflated and responds to profit or surplus imperatives), then the issue of continuity – what the publishers like to call sustainability – evaporates. The issue of COST is crucial here: APCs cover both the cost of publishing and the price of reputation. Funding agencies should not be concerned with the latter, and they should openly set up their own criteria for evaluation without falling into the trap of a universal form of evaluation made possible by journal rankings. Funding agencies should directly examine the quality of the work done with their financing, and they should also examine whether it aligns with the objectives of their research programme. None of this is related to reputation.
The organizations most obsessed with reputation are the organizations that manage reputation through some mechanisms such as an algorithm for ranking purposes. Their power, including earning power, depends on it. Ask Clarivate!

“Publishing is part of the research cycle and no one worries about the continuity of support for research.”

There’s a lot to unpack in that sentence, and perhaps it’s worth considering further. First, everyone worries about the continuity of funding for research. We are seeing project after project in the US shut down midstream, and clinical trials canceled, leaving patients without treatment. New policies from the administration allow for grants to be defunded at any point after they are issued. Carl Bergstrom writes (https://bsky.app/profile/carlbergstrom.com/post/3lvu364znrc2b) that this is devastating because scientific programs are planned on a long-term scale, and so the ability to yank money at any point will result in all research becoming short-term incremental studies. Further, this lack of continuity of support makes it impossible to train graduate students, as one can no longer ensure their positions in the lab long enough for them to graduate.

That said, it’s also worth thinking about the timescales involved in a research grant versus the timescales needed for research communication infrastructure. A set of experiments is a finite thing, and a reasonable time scale for a grant may be 5 years, 10 years. Scholarly communication is about the long-term recording and preservation of the historical record of research. It needs to exist essentially forever, and unlike a research project where one collects a result and then moves on to the next experiment, in scholarly communication, everything that has been done in the past still needs to be supported on an ongoing basis. It’s often an unmentioned problem with the APC model (and, for example, PeerJ’s old membership model) — you get paid once for a paper that you now have to maintain forever with no further support.

Which gets back to the central problem of funders being good at building things and terrible at maintaining things. See earlier this month, when FlyBase, an essential genetics community resource, had to procure emergency funds to continue existing, with its long-term future remaining at risk. https://www.researchprofessionalnews.com/rr-news-uk-research-councils-2025-10-genetics-database-hit-by-trump-cuts-bailed-out-by-wellcome/

That really resonates. I think this is where the difference between funding research and funding the infrastructure that sustains its record becomes stark. Research can be cyclical and grant-based, but publishing infrastructure isn’t something that can be switched on and off; it has to endure long after individual projects end. That gap between how we fund research and how we sustain its outputs is exactly what keeps surfacing across so many of these conversations.

Precisely, but this is also the result of a historical shift. Funding agencies did not use to rely on a competitive system based on projects. They supported programmes and teams. If that situation still prevailed, there would be no obstacle to funding infrastructures in their mandates. The shift to project-based competition emerged in the ’70s and particularly during the Reagan-Thatcher years. It was part of the idea that the best way to extract results from researchers is to submit them to as intense a competition structure as possible. In so doing, cooperation was debased and the whole research process was left gravely unbalanced. Publishers simply took notice and so did Garfield. He shifted the SCI from a bibliographical tool and a tool to study the life of scientific ideas to a tool to identify elites and “excellence”. Derek de Solla Price helped him on that score, and Merton did provide some mild warnings, but the construction of the particular competitive market needed to sell scientific journals had made an immense step forward. Remember that the challenge was to find a way for entities that hold a monopoly on their content to compete with each other!

If funding agencies decided to allocate resources to supporting collaborative activities, including publishing platforms, a first corrective step would be achieved. The ORE platform in Europe is an example of this kind of effort, even though the modesty of the effort needs to be underscored. More than that will be needed.

Interesting provincial anecdotes these, but they are only anecdotes. Some people worry about some research funding somewhere at all times, but not everyone. I withdraw my own “no one” from an earlier comment.

At least since the “Scientific Revolution”, research has been constantly provided for, and many of the institutions financially supporting research also support publications. If only for military reasons – and there are many other reasons to invoke – research has been supported for a very long time, and will continue to be supported. The present US situation is interesting but, hopefully, it is also anomalous. And one should not confuse the USA with the whole world.

The long-term preservation of the scientific record is also an interesting issue. For one thing, companies die. If I remember correctly, Elsevier itself recognized that it could not guarantee perpetual preservation for its own publications; it proceeded to negotiate some level of preservation with the Royal Library of the Netherlands. Libraries are indeed better institutions to preserve than private companies. LOCKSS, for example, is a better system than any private company’s. Libraries are involved in LOCKSS, not private publishers.

If publishing is conceived as an integral part of the research cycle, and if granting agencies develop their own publishing tools, either alone or, better, in a network, publications produced in this context will endure at least as well as those of a private company, and they will also be preserved at least as well. Nothing human is eternal, least of all private companies.

I see where you’re coming from, Jean-Claude, and I agree that preservation tied to research infrastructure has a different kind of stability than relying on commercial models alone. Companies can and do disappear, but well-structured networks of libraries, consortia, or public systems tend to offer more reliable long-term stewardship.

It also feels like we sometimes underestimate how much the way we define publishing itself shapes these preservation strategies. When it’s treated as an integral part of research — not just a service layer — it opens the door to more sustainable and collective solutions. That shift in framing seems just as important as the funding models behind it.

Thank you, Maryam (if I may). The point I have been making all along is that the research process and its results should be under the stewardship and control of the research communities and institutions. As for commercial entities, they may be useful at times, of course, but they should be limited to providing some specific services to the research process without impinging upon the research process. Think of a laboratory needing test tubes. It is probable that it is better to buy them than to make them in the lab. However, imagine now the same lab buying test tubes but the evaluation of the research results is tied to the brand of the test tube! Just imagine…

That’s such a clear way to put it. I really like the “test tube” analogy; it captures the difference between relying on commercial services and giving them influence over the core of the research process. The idea of stewardship staying with the research community feels not just logical but necessary.

The idea of No-APC Open Access is admirable, as it promotes the free exchange of knowledge. However, the reality is that many journals still impose article processing charges (APCs) that are often unreasonably high. While it is understandable that publishers need to cover operational costs, the fees should remain reasonable and aligned with the spirit of scientific sharing and collaboration. Unfortunately, the growing emphasis on publications for university rankings has created a system where publishers can exploit this demand, resulting in inequity between researchers with ample funding and those with limited resources. This situation risks reinforcing a divide between “rich” and “poor” scientists, which ultimately undermines the inclusivity and fairness that open access seeks to achieve.

Spot on! Especially the penultimate sentence. And you could add that the citation-chasing scheme is even affecting the choice of problems by researchers. When you mix two imperatives – knowledge and profit – you end up with a very messy situation. And those who blindly advocate market mechanisms as the best way to allocate scarce resources should keep in mind that a market implies rules of competition, and that these rules can be constructed quite artificially (like the citation-chasing scheme of the impact factor). Whenever you invoke a market, examine its competition rules: that is where the very human side of markets appears most clearly.

Thanks for this thoughtful post, Maryam!

A trend I notice here is a focus on funder or government financial assistance, which is certainly a viable route to Diamond OA, but it’s not the only one.

With that said, I think it’s essential to consider the needs of publishers in context (something I discussed in a Scholastica OA Week blog https://blog.scholasticahq.com/post/cooperative-oa-journal-models/). Nonprofit organizations may wish to subsidize Diamond publications for their communities. In such cases, there may be partnership opportunities, such as sister societies teaming up to share the costs or library partnerships (even cross-globally) via existing open infrastructure (OJS, etc.) or vendor support. However, many nonprofits need to generate predictable revenue from their publishing efforts to maintain their operations. In such cases, I think it’s worth noting that the S2O model is showing promise (now with the emergence of nonprofit aggregate S2Os). Only time will tell how viable S2O is, but we won’t know unless organizations test it out.

While we’re far from finding all the answers, I think it’s worth considering models that are working in addition to aspirational frameworks to encourage experimentation. I don’t think anything in this post or thread is antithetical to that; I just mean to bring some pragmatic optimism to the comments. At the end of the day, nonprofits still need business models to employ workers (going back to the point of the flaws inherent in glorifying gift labor). I realize S2O may be more achievable for Western organizations than for those in the Global South, but if nonprofits in the West could embrace that or other OA models to keep or bring journal publishing in-house and control their own costs/pricing models, perhaps that could inject healthy competition into the marketplace to promote diversification rather than consolidation and free up resources in other areas for other types of Diamond models to thrive (like through grants/subsidies).

You raise a great point, Danielle, about broadening the conversation beyond government or funder support. Partnership-based approaches and cooperative models among nonprofits can play a powerful role here, especially when combined with shared infrastructure.

S2O is particularly interesting in that sense because it sits at the intersection of collaboration and sustainability. While it may be more feasible in some regions than others, seeing more experimentation with models like this could help diversify the ecosystem instead of reinforcing a single funding pathway. That diversity is probably what Diamond OA needs most.

Broadening the base of financial support is fine, of course, but it remains that a solid, stable contribution from funding agencies is needed. Once again, publishing is part of the research process, so why should the funding agencies not support the cost (the “cost”, not the price) of publishing? And to make the system more neutral, better still, build an international network of funders – think of cOAlition S without Plan S, or SciELO with a few tweaks – that will support publishing on their own platforms. If authors want to publish elsewhere, for impact factor reasons, let them do so, but support them at the level of cost that the public platforms display.
The point of all this is that there is no need to try to protect the oligopoly. They take care of themselves very nicely, thank you! Smaller non-profit presses from societies and universities could also be helped by funding agencies, but only if their accounting books are fully open.

I am not really sure what to make of the discussion so far. I do understand that commercial and hence political interests are involved, but raising arguments about the costs of hosting PDF files (!) seems quite ridiculous, to be frank — in this day and age when data centers are being built all over the world for computing and hosting generative AI entertainment. So, say, we have some statistics about the costs of arXiv’s infrastructure, but how come archive.org has managed to do it for decades? How about things like Zenodo, which probably hosts more data nowadays than commercial publishers do, in terms of size? That is to say, things do not really add up in the above arguments. Thus, better arguments, please!

As another commenter noted, arXiv’s budgets are public and can be seen here:
https://info.arxiv.org/about/reports/index.html
And those millions are spent every year to do just a tiny fraction of what journals do.

If you want to get a sense of what it costs to run a journal, EMBO is very transparent. They are a non-profit and essentially run their journals at a break-even level, so you can see what everything costs here: https://www.embo.org/features/the-cost-of-scientific-publishing/

From their site (https://www.embopress.org/open-access):

“For Research Articles and for Reviews eligible for OA publication support, the charge will be €6,790 / $7,990 / £5,790*. For non-commissioned Comment articles or for Comments eligible for OA publication support, the charge will be €2000 / $2390 / £1690*. ”

And then from the link you provided: they got about 2.3M from APCs and paid about 2.0M for “outsourced publishing services and digital platforms”. Now, where did this 2.0M really go? I think you know the answer already.

Paying nearly 7K for PDF hosting and automated, error-prone typesetting is just plain craziness in my book. Though, granted, I suppose the people publishing with EMBO and its outsourcing are paying for prestige, brands, and basically hot air — just like many things in modern capitalism.

Hi Jukka,

I suspect you are either unaware of, or are deliberately misrepresenting, the services that journals provide. If your needs are met solely by services that simply post PDFs online, then there are wonderful alternatives other than journals (e.g., lab websites, preprint servers) that should be great for meeting your needs. Other authors and readers may be seeking more.

Also a question: if hosting PDFs is so inexpensive, then why does arXiv need a $7M budget in 2025? Why does bioRxiv spend $3M per year?

Interesting rhetoric: what is the tiny fraction quantitatively? Which journal(s) are being compared to arXiv? How do you compare a setup such as arXiv with a journal, and what type of journal? Etc. etc. Is the comparison limited to EMBO?

If you can provide a quantitative framework for measuring publishing services, I’d be happy to try to make that comparison. How do you quantify plagiarism checks? Image manipulation? Peer review? Without that, I would settle for “journals do a lot more than preprint servers.” Are you suggesting that is not true? I used EMBO as a comparison because they’re one of the more transparent publishers as far as their costs. PLOS also puts out an annual report. Does arXiv do all of the same things that PLOS Medicine does? arXiv doesn’t even do checks at the level of medRxiv, so I think we both know the answer to that question.

One can certainly make an argument that the things PLOS or OLH or EMBO or Elsevier do are not necessary, but let’s not pretend that Nature works exactly the same as a preprint server nor that all that journals do is host PDFs online. Even preprint servers do more than that.

I agree with David’s point; journals do far more than simply host PDFs, and those layers of review, validation, and quality control are what distinguish them from repositories. This is such an interesting exchange; it really shows how tricky it is to compare publishing systems when there’s no common way to measure the work behind them. Hosting alone is rarely the full picture; what matters is how each model values and supports the human and technical effort that turns a submission into a reliable part of the research record.

The point here is that the diversity of costs can be resolved if funding agencies simply decree that costs will not be supported beyond X. If funding agencies build their own platform, they will know what it costs them and can use that reference to cap their APC payments. As for the difference between a pre-print server and a journal, that difference varies enormously from one instance to another. The question here is: what do researchers really need? It is not what publishers claim they do (over a hundred roles, if I remember a silly discussion from some years back). The initiatives in these domains should be in the hands of the research communities, institutions and funding agencies, not publishers, and especially not commercial publishers (or publishers that behave as if they were commercial).

As I mentioned in the paper I linked, the cost of hosting the content is almost zero. The real costs in publishing are for paying humans. We pay about $20/month to host a journal’s server, but this is less than the cost of a half hour of a human’s time to administer the server. arXiv’s budget is public – see https://info.arxiv.org/about/reports/FY25_Budget_Public.pdf for example. You can also find their IRS tax form since they are a 501(c)3 non-profit, or you can look up their annual report. They recently migrated to “the cloud” and had to incur significant labor costs to do so. They don’t even touch the papers that are submitted, whereas most journal publishers invest human labor in reformatting the articles from the original source of the author (often Microsoft Word or LaTeX) to extract metadata and produce either PDF or JATS or HTML. It’s not just about “hosting a PDF” – it’s mostly about paying humans to perform administrative or IT tasks. The Internet Archive operates at a vastly larger scale than most journals or even publishers. They had expenses of $32.7M in 2023. See https://projects.propublica.org/nonprofits/organizations/943242767

You’re absolutely right that most costs lie in human labor rather than infrastructure. Hosting itself is rarely the issue; it’s the layers of editorial, technical, and quality-assurance work that make publishing sustainable and trusted.

What strikes me is that this human effort often goes unaccounted for in funding or policy conversations. Even when technology reduces certain costs, there’s still a need to value and support the people maintaining quality, accessibility, and integrity in the process. Diamond OA can only be sustainable if that human element is recognized as part of its real cost base.

“The Internet Archive operates at a vastly larger scale than most journals or even publishers. They had expenses of $32.7M in 2023.”

Thanks for this number! It really puts things into a proper perspective!

“They don’t even touch the papers that are submitted, whereas most journal publishers invest human labor in reformatting the articles from the original source of the author (often Microsoft Word or LaTeX) to extract metadata and produce either PDF or JATS or HTML.”

I am still not convinced:

1. In most LaTeX-heavy fields, including everything published by ACM and IEEE, authors themselves do all formatting.

2. And regarding some big commercial publishers, the so-called “typesetting” they do is basically about running LaTeX source files through an automation script that adds logos and layout features. That’s it from the authors’ perspective. The rest is essentially exploiting authors’ work, as is the case with training LLMs on paywalled data.

But I do agree that MS Word is a problem and Word-dominated fields thus contribute to the overall deadlock situation.

It’s true that in most LaTeX-heavy fields there is rather little to do in reformatting an article for a journal. The paper that I linked to is about an open source system that completely eliminates any need to touch the LaTeX for metadata or formatting (there is a shorter version to appear in the TeX Users Group journal TUGboat). The ACM and AMS societies have both stated in the past that they use the revenue they get from publishing to fund other activities. These large scholarly societies have staff that need to be paid in order to provide those services, and that’s the reason they cling to their publishing revenue. This leads us back to the conclusion that the thing that is expensive in publishing is human labor. Every industry has seen modernization and automation that reduces the need for human labor, including examples like agriculture, textiles, transportation, and manufacturing. The publishing industry has been slow to adopt technologies that will streamline its operations. This applies to both scholarly societies and commercial publishers. They are unfortunately also experiencing a lot of abuse from plagiarism, the use of LLMs, misrepresentation of data and results, etc. This only increases the pressure on them, but most of the responsibility will fall back on the peer review system. Peer review is not that old by historical standards, and we are currently seeing a lot of experimentation in an attempt to make it more effective (e.g., openreview.net).

I am not impressed with the anti-plagiarism services. They will soon be overwhelmed by the progress in LLMs, and they have a tendency to report too many false positives. As for peer review, there are many journals and conferences who run open source systems like OJS, Janeway, or HotCRP at almost no cost other than the IT skills required to run a server. All of these are backed by companies that will run them for you at low cost.

Both perspectives make sense here, and together they capture how uneven the economics and workflows of publishing really are. In LaTeX-heavy disciplines, automation and author-driven typesetting have already minimized much of the human labor, while Word-dependent fields still rely on intensive manual processes that slow things down and raise costs.

What this really shows, though, is how uneven the publishing landscape is, both technologically and culturally. Until we have more interoperable, open-source tools that work across disciplines, the sustainability conversation will keep circling back to labor and workflow inefficiencies rather than just access models.

Well done on this informative piece, Maryam, and for the industry discussion it sparked. This is a really timely, well-considered piece with great insights and links to sources.
