This week DeltaThink released a report looking at the potential impacts of the long-awaited NIH plan to cap grant recipient spending on publication fees, particularly article processing charges (APCs) for open access (OA) publication. The report suggests the NIH’s policy (depending on how it is implemented) will result in a decline of somewhere between 0.9% and around 3.5% in the global amount paid for OA publishing. But the raw numbers don’t really tell the story of what will happen with such a policy, and it’s worth thinking about the reasons why it will be much less impactful than it seems.

The NIH is not alone in looking to change the way researchers publish their results, nor in looking to regulate publisher business practices. The Chinese Academy of Sciences has announced a policy specifically aimed at around 30 “high-profile, high-fee” OA journals including Nature Communications, Cell Reports, and Science Advances. Other funders have gone further. Cancer Research UK last month announced that they were generally fine with a 6-month embargo on papers becoming available via Green OA and that they would cease all payment for OA APCs. Since January 2025, Wellcome only pays for APCs in fully-OA journals, and at the same time, the Gates Foundation ceased APC payments across the board. Within each of these policies is an implicit (and sometimes explicit) goal of using the power of the purse to change researcher publication behavior and to restructure research communications, whether by pushing authors away from journals deemed excessively expensive or by eliminating subscription journals altogether.


How effective are these policies?

A quick look at Dimensions suggests they’re perhaps not having the intended effect, and that significant numbers of researchers are finding workarounds. In 2025, there are 10,298 papers that list Wellcome funding. Despite a requirement that all results be published OA, there were still nearly 700 subscription-access-only papers. The percentage leaps in 2026 to date, with 323 subscription-access papers published out of a total of 3,394. Further, Wellcome specifically bans spending on hybrid OA, yet 3,104 Wellcome papers in 2025 and 1,051 to date in 2026 were published OA in hybrid journals (more than 30% in both years).

Gates, which similarly requires OA publication and won’t pay any APCs at all, shows a similar leakiness in its results. Of 2025’s 3,717 papers listing Gates funding, 425 were published as subscription articles (11.4%). More than 900 papers were published OA in hybrid journals, and the most popular destinations for Gates-funded papers were journals from Elsevier, Springer Nature, and Wiley (followed by PLOS, with whom Gates has an agreement granting prepaid publication for funded authors). The 2026 numbers are even more problematic in terms of compliance, with more than 20% of the year-to-date 1,150 articles listing Gates funding published as subscription-only articles. Elsevier and Springer Nature journals remain the top targets for Gates-funded authors. Gates also requires researchers to preprint their articles and has built its own VeriXiv preprint server for this purpose. The 2025 numbers show only 1,305 preprints (35% of the number of articles), and 2026 shows 412 preprints (36%). Of those preprints, 459 were posted in VeriXiv in 2025 (35%), and around 39% in 2026.

What’s going on here, and why are these policies less effective than expected?

Everyone wants to keep their funder happy, right? So, why aren’t researchers moving away from hybrid journals, away from journals with expensive APCs (507 Wellcome and Gates papers were published in Nature Communications in 2025), or away from paying APCs at all?

First, as we know from the earliest OA policies, compliance is expensive and often complex to monitor. Unless the funder ruthlessly cracks down on grantees, some level of non-compliance will always persist.

But aside from some baseline of completely non-compliant authors, Lisa Hinchliffe in 2024 noted one reason why the rules aren’t carefully adhered to — individual funders do not exist in a vacuum. Many papers list funding from more than one source, and while an organization like Gates or the NIH may want to strike a blow against journals they perceive to be expensive, authors with funding from other sources can subvert their intentions and still remain in compliance. As the aphorism goes, “a slave with two masters is a free man.”

Another key workaround for funded researchers has been provided by the rise of transformative agreements (TAs). TAs, also often called read-and-publish deals, bundle payment to the publisher for read access to subscription content and payment for OA publishing for authors into a single contract. If an author is at an institution that has signed a TA with a publisher, then the APC for that author’s article is no longer coming directly out of their grant.

This means that the funder has zero control over publication spending, because it is being done by the institution’s library, not the grant recipient. Most libraries operate through a variety of funding sources, ranging from tuition and student fees to grant overheads. Even if a particular funder barred any of its overhead funds from going to TAs, the library could simply pay for TAs using funds from other portions of its budget. Problem solved.

And as we know from the Plan S obituary, TAs favor hybrid journals over fully OA journals, producing the opposite of the outcome funders are aiming for here. Further, TAs favor researchers at larger, wealthier institutions that can afford them, and the largest, most profitable commercial publishers, who have the scale to make TAs work. Should the NIH eventually announce their APC caps, I would expect to see a rush of TAs signed in the US, similar to the rapid uptake in the EU following the implementation of Plan S. The rich (researchers/institutions/publishers) will get richer, again — an unintended, but entirely predictable, consequence.

Whose money is it anyway?

The other factor that prevents funder policies from steering researcher and publisher behavior is the disconnect between the person receiving the funding (usually the lab’s Principal Investigator or “PI”) and the person doing the actual research and writing the paper (usually a graduate student or a postdoc). The grad student/postdoc is eventually going to leave and (at least for the small percentage of researchers who stay in academia) start their own lab. But they don’t get to take the PI’s funding with them; that stays with the PI. The PI is the only one with a vested interest in keeping the funder happy. The student/postdoc gains no benefit from funder compliance and is instead focused on doing what provides the maximum career advancement potential — essentially publishing in the highest-impact journal possible, regardless of the cost or any clash with their PI’s funder’s wishes.

I wrote about the conflict of interest this creates for lab heads back in 2019. The PI has a duty of care toward their students and postdocs, to do everything they can to help them launch a successful career. Any lab that adheres strictly to the Gates policy, for example, and declares that a preprint is the only necessary output for a research project, is going to be a lab devoid of graduate students or postdocs. Why would you go work somewhere that prohibits you from getting that Nature paper (or any paper at all) that you need to get a job?

At the same time, the PI needs to keep the lights on in their own lab and is obligated to follow their funder’s rules, regardless of how damaging it may be to their students’ career hopes. Most are left with a difficult balancing act, and that’s why we see so many workarounds that follow the letter of the law but not its spirit.

And so, as Lisa Hinchliffe wrote, these sorts of policies do little other than to increase the compliance burden for researchers and, as cOAlition S noted, often have the opposite effect from what was desired. As I’ve written elsewhere, policy is a blunt instrument which is usually goal-focused rather than process-focused. So, while the DeltaThink numbers are interesting, I’m willing to bet that the real-world impact of the NIH’s potential APC caps will be significantly smaller in terms of the reduction in overall publication spend, and significantly larger in terms of further market consolidation.

David Crotty

David Crotty is the Executive Director of Cold Spring Harbor Laboratory Press. Founded in 1933, CSHL Press is an internationally renowned publisher of books, journals, and electronic media, and is a division of Cold Spring Harbor Laboratory, an innovator in life science research and the education of scientists, students, and the public. Previously, David was a Senior Consultant at Clarke & Esposito, a boutique management consulting firm focused on strategic issues related to professional and academic publishing and information services. David was the Editorial Director, Journals Policy for Oxford University Press. He oversaw journal policy across OUP’s journals program, drove technological innovation, and served as an information officer. David acquired and managed a suite of research society-owned journals with OUP, and before that was the Executive Editor for Cold Spring Harbor Laboratory Press, where he created and edited new science books and journals, along with serving as a journal Editor-in-Chief. He has served on the Board of Directors for the STM Association, the Society for Scholarly Publishing and CHOR, Inc., as well as The AAP-PSP Executive Council. David received his PhD in Genetics from Columbia University and did developmental neuroscience research at Caltech before moving from the bench to publishing.

Discussion

14 Thoughts on "APC Caps and Bans — Why Funder Policies Aimed at Curbing the Publishing Industry Don’t Work"

Interesting blog, but I’m not clear on the #1 reason why APC capping would not work. HIFA recently hosted a discussion sponsored by Oxford PharmaGenesis where APC capping was one of the main recommendations. Here is another blog we published at the time: https://www.hifa.org/news/what-we-learned-about-open-access-publishing-healthcare-perspective Finally, what are the options for reducing the phenomenal APCs that some publishers charge? (Neil Pakenham-Walsh, HIFA Coordinator)

I’m not sure there’s one main reason, though if forced to choose, I’d probably go with the career advancement and funding systems in place in academia. Researchers are smart, ambitious people, and if you set a rule that interferes with their career progression, they’ll likely find a workaround that keeps you happy but lets them do what’s best for themselves. And as noted above, those workarounds are more readily available to well-funded, established researchers at well-funded institutions, thus further reinforcing existing norms and hierarchies.

APCs are, at least for a lot of journals, lower than they would need to be if the journal were reliant solely on them, and are largely subsidized by subscription and licensing revenues. In a fully OA world, APCs would likely be a lot higher and would continue to rise as more and more demands are made on publishers (e.g., we’re now being asked to do extensive reference checking on all submitted papers, which requires expensive systems and lots of editor time, hence more costs).

But to answer your question, one reason APCs are so high is that the articles that are accepted have to pay for the articles that are rejected. So if you reject 90% of what comes in, the 10% of articles you accept have to pay for all the work on all those rejected articles. Way back in 2016, I made a suggestion on how to fix this problem, which still makes sense to me yet remains incredibly unpopular with researchers and funders: submission fees:
https://scholarlykitchen.sspnet.org/2016/10/10/can-highly-selective-high-end-journals-survive-on-apcs/
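To make the cross-subsidy arithmetic concrete, here is a minimal sketch. The per-submission cost and acceptance rate are hypothetical figures for illustration, not numbers from the post:

```python
# Hypothetical illustration of how selectivity drives APCs.
# Assume each submission costs the journal a fixed amount to handle
# (editorial triage, peer review management), accepted or not.

cost_per_submission = 500   # hypothetical handling cost per submission, USD
acceptance_rate = 0.10      # journal accepts 10% of submissions

# Each accepted article must carry the handling cost of itself plus the
# nine rejected submissions behind it.
editorial_cost_per_accepted = cost_per_submission / acceptance_rate
print(editorial_cost_per_accepted)  # 5000.0
```

A submission fee, by contrast, would spread that same cost evenly across all submitting authors rather than loading it entirely onto the accepted ones.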

Thank you, David Crotty, for your response. “But to answer your question, one reason APCs are so high is that the articles that are accepted have to pay for the articles that are rejected.” I thought APCs were so high because the publisher wants to make huge profits. Won’t they charge whatever anyone is willing to pay, regardless of how much they may spend on articles that are rejected? (And this is digressing, but presumably one of the benefits of AI would be to make it easier and cheaper to weed out most of the 90% that are not publishable.)

While your main point is broadly accurate (APCs are largely set at what the market will bear), it’s perhaps not as straightforward as assuming that “the publisher wants to make huge profits,” as there are many different types of publishers with many different motivations. Yes, for commercial publishers, the primary duty is to one’s owners or shareholders, but is that also the case for independent, nonprofit publishers? Do we assume that eLife is greedy and seeking to make huge profits because their APC ($3000) is higher than that of Springer Nature’s Scientific Reports ($2850)? Is a publisher like AAAS greedy because they have relatively high APCs even though they allow authors to comply with funder OA policies at no cost through Green routes?

The market is not fully rational. My apocryphal understanding is that the initial PLOS APC of $3000 was largely pulled out of thin air, and that seems to have set some sort of anchor for the market over the years. And of course, as noted, hybrid journal APCs are subsidized by subscription revenue and remain artificially low.

And this is digressing, but presumably one of the benefits of AI would be to make it easier and cheaper to weed out most of the 90% that are not publishable

My impression, at least so far, is that AI systems are expensive, and then require significant amounts of human oversight, so I’m not sure they’re providing a lot of cost relief. But even so, and maybe this makes me old fashioned, I remain a firm believer in human judgment and I want expert editors and expert peer reviewers making filtering decisions rather than relying on black box word guessing machines.

Wellcome banning spending on open access for articles in hybrid journals is not the same as banning authors from publishing there. Wellcome makes available rights retention language (as do many institutions) that facilitates authors publishing in hybrid journals without paying for open access and then making the author’s accepted version of the manuscript open in a repository (Wellcome stipulate EPMC) with no embargo period and a CC-BY licence. This is in line with Wellcome policy.
This also means that the numbers of Wellcome-funded articles found in hybrid or subscription journal is irrelevant, so these numbers cannot be used to indicate that the policy isn’t working.

Sorry, to make this more clear, the numbers above listed as “hybrid” are articles that are published OA in hybrid journals, not articles that are published as subscription articles in hybrid journals (and possibly made freely available via Green routes). If Wellcome’s policy is meant to steer authors paying for OA away from paying for it in hybrid journals and only paying for it in fully OA journals, then it is clearly not working.

Thanks for this analysis, David. I am wondering what percentage of the articles published in subscription-only journals have been made available in institutional repositories. Green open access doesn’t seem to have been considered here? Apologies if I am misreading.

It’s a good question, but I don’t have an easy way to derive that data. Of the 1,025 Wellcome-funded articles listed as being subscription-only in 2025 and 2026, 599 of them were published in Elsevier and Springer Nature journals, which as far as I know do not allow rights-retention-strategy Green OA. Of the 658 Gates subscription-only articles, 415 are in journals from those publishers. But without definitive data, I can’t say much about them, and I’m more interested in things we can measure that contravene funder policy, like publishing OA in hybrid journals when a funder is trying to block the use of hybrid OA, or paying expensive APCs with a funder that refuses to pay APCs at all.

This data should be available simply as Green OA in Dimensions.

“Of the 1,025 Wellcome funded articles listed as being subscription-only in 2025 and 2026, 599 of them were published in Elsevier and Springer Nature journals…”

An article showing as subscription-only will not have a Green copy in a repository (at least that OpenAlex can see), since if it did it would be showing up as Green instead. To answer Danny’s question directly, I’d look at Green OA rates by journal title (though you’d have to know which journals are subscription and which are hybrid).

I see 622 Green articles from Wellcome in 2025 or 2026 in Dimensions. 185 published with Elsevier, 91 S-N, 55 OUP. Top titles are The Lancet (19), Lancet ID (14), Science (13), Nature (12), NEJM (11)…

Thanks, Eric, for this clarification. I am always a bit confused by the OA classifications in most bibliometrics databases, which I believe largely come from the OurResearch folks. They won’t show any article twice, but I never know their current hierarchy — if there’s a subscription copy of the article available as well as a Green OA version, which do they show and which do they suppress? (Sounds like, from the above, they would show this as a Green OA article.) But if there’s a Green OA version and a Bronze OA version, which shows? Or any other combination?

Also, in terms of articles from 2025 showing as Green, are a portion of those (given that it’s May) articles that have reached their one-year embargo and been made Green via the publisher, rather than articles that were made Green immediately upon publication (as is required)?

Yes, as I understand it goes first by `is_oa`, then by the `best_oa_location` field. So in your example, a subscription article that shows `is_oa` = False would take the first exit and get classified as Closed, done. However, if OpenAlex finds an open copy (`is_oa` = True), then the cascading algorithm kicks in. If the best source is in DOAJ, it’s Gold. If the best source is a repository, it’s Green. If neither, it’s on the publisher webpage and we start looking at licenses to sort bronze from hybrid. I think they rank copies on publisher websites higher when determining the `best_oa_location`, so in your example above, a Green repository version and a Bronze version would return the Bronze.

You can actually see in their source code how the algorithm is set up: https://github.com/ourresearch/openalex-guts/blob/8b0e87d0589abb75e03b66dbc7e6b6a00896688f/models/work.py#L202
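The cascade described above can be sketched roughly as follows. This is an illustrative reconstruction, not OpenAlex’s actual code; the dictionary keys (`is_oa`, `best_oa_location`, `source_in_doaj`, `source_type`, `license`) are simplified stand-ins for fields on the OpenAlex work object:

```python
# Illustrative sketch of an OpenAlex-style OA classification cascade.
# Field names are simplified assumptions, not the real API schema.

def classify_oa(work: dict) -> str:
    """Return one of: closed, gold, green, hybrid, bronze."""
    if not work.get("is_oa"):
        return "closed"          # no open copy found anywhere: first exit
    best = work.get("best_oa_location") or {}
    if best.get("source_in_doaj"):
        return "gold"            # best copy is in a fully OA (DOAJ) journal
    if best.get("source_type") == "repository":
        return "green"           # best copy lives in a repository
    # Otherwise the best copy is on the publisher's website:
    # the presence of an open license separates hybrid from bronze.
    return "hybrid" if best.get("license") else "bronze"

# A subscription article whose only open copy sits in a repository
# therefore shows up as Green rather than Closed:
example = {"is_oa": True, "best_oa_location": {"source_type": "repository"}}
print(classify_oa(example))  # green
```

This also illustrates the point above about ranking: because only the single `best_oa_location` feeds the cascade, an article with both a Green copy and a publisher-hosted Bronze copy is classified by whichever location OpenAlex ranks higher.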

Re: embargoes, yes that’s certainly possible, and always a reason for caution when interpreting current (+/- 1) years. Each day, more articles may come out from behind a one-year embargo and switch from Closed to Green.

Thanks for your comments David Crotty. Good discussion.

“Do we assume that eLife is greedy and seeking to make huge profits because their APC ($3000) is higher than that of Springer Nature’s Scientific Reports ($2850)?”

I think my original suggestion [I thought APCs were so high because the publisher wants to make huge profits] would be better stated as [I thought APCs were so high because the publisher wants to make as much income as they can]. In terms of motivation for income, this is a priority for most publishers, whether commercial or non-profit.

“The market is not fully rational. My apocryphal understanding is that the initial PLOS APC of $3000 was largely pulled out of thin air, and that seems to have set some sort of anchor for the market over the years.”

Except that APCs vary enormously, from zero to several thousand pounds.

I would see the market as being quite rational. Publishers will charge what they think they can get. They level their APCs to generate significant income while keeping them attractive to researchers. The more prestigious the journal, the more desirable it is, and the higher the APC it is able to charge.

High APCs are also a way to dissuade some of the 90% of researchers you referred to whose papers would be rejected, thereby saving the time spent processing such papers.

With regard to AI, I look forward to seeing any research that looks into whether and how AI can be used to screen submissions. I think AI might prove to be faster, cheaper and more effective than human editor assessment for some tasks.
