It’s been hard to turn to email, chats, Twitter, or any of my other informational feeds since Thursday and not hear an analysis or opinion about the OSTP Policy Memo (a.k.a. the Nelson Memo) that came out last week. The subject? Ensuring Free, Immediate, and Equitable Access to Federally Funded Research. Now that’s going to get attention!

While the language is soft in places, using words like “recommends” and “should” and perpetuating the term “public access”, the memo is a push to open — and its intention is clear.

So we quickly asked the Chefs: What are your initial thoughts about the OSTP policy announcement?

(The Chefs had a lot to say about OSTP so this is Part I of a two-part post.)

[Image: View of the White House with the Washington Monument in the background]

Tim Vines: Being an Open Data enthusiast, my initial thought is ‘Wooohooo!’. I’ve been telling anyone who’ll listen that governments will one day mandate sharing of data produced by their researchers and are very likely to push responsibility for enforcement onto journals. And that day has just drawn a lot closer.

For comparison, the Holdren Memo (section 4) is very general about data (“digitally formatted scientific data resulting from unclassified research supported wholly or in part by Federal funding should be stored and publicly accessible”), whereas the Nelson memo (section 3.b.i) goes directly to:

“Scientific data underlying peer-reviewed scholarly publications resulting from federally funded research should be made freely available and publicly accessible by default at the time of publication”

This approach makes a lot of sense: only journals have consistent access to manuscripts that can still be changed to incorporate complete data sharing statements. Journals also have a clear ‘moment of attention’ (do ‘X’ or we won’t publish this manuscript) when authors can be compelled to comply with a funder data sharing policy.

A potential workflow would see journals pass ‘in review’ articles to a third party to assess compliance with the relevant funder policies, with the authors then told what they need to do to comply. Having article-level compliance monitoring also sidesteps a major point of contention for these policies: defining which datasets should be shared. We could (and let’s face it, probably will) labor for years to draw up detailed guidance for every conceivable type of data and study, and still researchers would complain that the advice didn’t apply to their particular situation. With article-level compliance monitoring, one can instead formulate a broad policy and use the article-level report to tell authors exactly which data (and other outputs) they need to share to comply; everyone – funders, journals, researchers – is then clear about what needs to be done.
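
To make this concrete, here is a minimal sketch of what an article-level compliance check could look like; the policy rules, manuscript fields, and repository URL are entirely hypothetical and are not drawn from any funder’s or vendor’s actual system:

```python
# Hypothetical sketch of an article-level compliance report.
# The policy schema and manuscript structure are illustrative only.
from dataclasses import dataclass, field

@dataclass
class FunderPolicy:
    name: str
    required_outputs: set = field(default_factory=lambda: {"dataset", "code"})

@dataclass
class Manuscript:
    title: str
    funder: str
    shared_outputs: dict = field(default_factory=dict)  # output type -> repository URL

def compliance_report(ms: Manuscript, policy: FunderPolicy) -> list:
    """List the actions the authors must take before the article complies."""
    return [
        f"Deposit the {output} in a public repository and link it from the manuscript."
        for output in sorted(policy.required_outputs)
        if output not in ms.shared_outputs
    ]

# Example: the dataset is already shared; the code and protocol are not.
policy = FunderPolicy("Hypothetical Agency", {"dataset", "code", "protocol"})
ms = Manuscript("Example study", "Hypothetical Agency",
                {"dataset": "https://repository.example.org/record/123"})
for action in compliance_report(ms, policy):
    print(action)
```

The report itself, rather than pages of general guidance, is what tells each author exactly what is still missing for their particular article.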

As the excellent Clarke & Esposito summary notes, the Nelson memo doesn’t mention additional resources being made available to support the new approach to publicly accessible articles and data. Nonetheless, it makes clear sense for funders to cover the costs of promoting compliance with their policies. For example, the National Institute of Neurological Disorders and Stroke had a 2021 budget of $2.7 billion, which generated on the order of 10,000 published articles. Those articles are probably the only visible return on that money: most of the datasets, code objects, protocols, and new lab materials associated with them are still hidden away on lab computers. Getting all of those outputs onto public servers and linked into the scholarly infrastructure to maximize their discoverability seems mission-critical for funding agencies and would be possible for less than the 5% of budget that Barend Mons recommends that funders put aside to promote open data. (Interestingly, the economic analysis that accompanies the Nelson memo also cites this Mons article.)

Another bit of good news is that non-government funders have already made a lot of progress on promoting open science with article-level monitoring; the Blueprint for Collaborative Open Science recently released by Aligning Science Across Parkinson’s is a particularly good example.

One other thought: the Memorandum also states that the OSTP will remain neutral with respect to business models, but I think they could achieve a lot by trying to move away from APCs.

APCs heap the costs of handling and reviewing all submitted articles onto the authors whose articles get accepted, while authors whose articles are reviewed and rejected pay nothing. Authors from countries with high acceptance rates (e.g. the US) thus pay for the peer review of articles from countries with lower acceptance rates. Chinese research output has grown spectacularly over the last few decades but the quality remains very variable. A substantial fraction of APCs paid by the US taxpayer thus goes toward reviewing (and rejecting) Chinese articles; it seems high time for the Chinese funding agencies to pick up these costs themselves.

The OSTP could instead recommend that US funding agencies replace APCs with the submission + publication fee model, which ensures that they pay only for the peer review and publication of articles with US based authors. Journal revenues would remain the same, but the publication costs for the US taxpayer would probably halve, if not more (particularly since US researchers tend to publish in high impact journals where the ratio of APCs to submission + publication fees is highest).
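
To see the mechanics, here is a toy calculation with entirely made-up submission volumes, costs, and acceptance rates (illustrative only, not real journal data); it shows how an APC set to recover all reviewing costs shifts the cost of rejected submissions onto the authors of accepted articles, while a submission + publication fee bills each group only for its own manuscripts:

```python
# Toy model: who pays for peer review under APCs vs. submission + publication fees?
# All figures below are hypothetical and chosen only to illustrate the mechanism.
REVIEW_COST = 500    # journal's cost per submission reviewed
PUBLISH_COST = 1500  # journal's cost per accepted article published

groups = {
    "High-acceptance group": (1000, 0.50),  # (submissions, acceptance rate)
    "Low-acceptance group": (3000, 0.10),
}

total_submissions = sum(subs for subs, _ in groups.values())
total_accepted = sum(subs * rate for subs, rate in groups.values())
total_cost = total_submissions * REVIEW_COST + total_accepted * PUBLISH_COST

# APC model: every cost is recovered from accepted articles only.
apc = total_cost / total_accepted

for name, (subs, rate) in groups.items():
    accepted = subs * rate
    apc_spend = accepted * apc
    sub_pub_spend = subs * REVIEW_COST + accepted * PUBLISH_COST
    print(f"{name}: APCs ${apc_spend:,.0f} vs submission + publication ${sub_pub_spend:,.0f}")

# The journal recovers total_cost under either model; only the split between
# the two groups of authors changes.
```

With these invented numbers the high-acceptance group’s spend falls from $2.0M to about $1.25M while the journal’s total revenue is unchanged; how close the real saving comes to ‘half’ depends entirely on actual fee levels and acceptance rates.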

Robert Harington: I was taken by surprise – perhaps I should not have been – but surprised I am.

The first words that formed in my brain were “Well I never…”, followed by a classic quote from Monty Python’s Flying Circus:

“We interrupt this program to annoy you and make things generally more irritating.”

It is almost as if there was a decision to kick the hornets’ nest and just see what happens.  In this day and age, who needs to consult with stakeholders to see what sustainable open policies should look like?

What’s done is clearly done. However, I can’t stop myself ruminating over how startling it is that a policy mandate like this was crafted without trying to engage with the hard questions it raises, with an apparent lack of understanding of discipline culture, the role of stakeholders such as scholarly societies, and the effects on research itself.

There is no question that all stakeholders in the publishing ecosystem — researchers, funders, institutions, libraries, and scholarly societies — understand that a move to equitable and inclusive open research and open access publishing is necessary. What is not so clear is how this may be done in a sustainable way. This latest mandate to US Federal funding agencies to develop new (or update existing) public access plans is laudable, but leaves out how this will be paid for. It also pays no heed to scholarly societies who rely on publishing revenues, but, depending on the field of endeavor, may have communities that are unable to pay an APC. There is scant mention in the OSTP memorandum of costs:

“In consultation with OMB, federal agencies should allow researchers to include reasonable publication costs and costs associated with submission, curation, management of data, and special handling instructions as allowable expenses in all research budgets.”

Does this mean that funders are committed to providing more money in their grants to cover the costs of publication? There is no mention of increased levels of funding. In some communities, such as mathematics, funding is relatively low, and what monies there are go to supporting the research itself rather than article processing charges (APCs). Nor is there mention of the suppressive effect that the need to pay for each article will have on authors’ willingness to publish – particularly junior researchers who might otherwise usefully exploit the data produced by research with spin-off work, but who might have limited access to monies to cover the APC.

The American Mathematical Society (AMS) is already experimenting with open access models. Our Green OA policies apply to peer-reviewed author final manuscripts. Our Gold options are free to AMS members, though there has been little take-up among non-AMS members. Our new flagship journal, Communications of the AMS, is Diamond open access – free to publish in and free to read, with Creative Commons reuse licenses attached. We are members of CHORUS, and as such comply with existing Federal funder mandates.

It is too early to say how this will play out, of course. I do worry that for mathematicians in particular, an unfunded mandate of this sort may place an undue burden on researchers.

David Crotty: Publishers who have been paying attention generally assumed that immediate public access to papers and open data requirements were in the pipeline, and that it was a matter of when rather than if. With these policies released, timelines are set, and planning that was hopefully in place can be accelerated.

It is noteworthy that this is a set of policies written with seemingly little input from publishers, and without the same level of consultation with key stakeholders as was undertaken for the Holdren Memo. The fact that this is now the second consecutive US administration that chose to craft research publishing policy in this manner speaks volumes about the poor relationships between the research publishing industry (particularly the industry’s advocacy organizations) and the federal government.

The financial impact report sent by the OSTP to Congress includes some significant leaps in logic due to inadequate information. There are several acknowledgments in the report about the lack of available data that a more thorough research process should have surfaced (e.g., actual publisher data on what it costs to publish a paper from EMBO, or from PLOS, as just two of many examples). Had this administration’s policymakers reached out more broadly to publishers, necessary information would have been readily available and more accurate projections could have been made without relying on questionable sources. The OSTP cites a “study” to back up a point about the cost to fund “long-term management of public access to research results and data,” which is actually not a study at all, but rather an editorial opinion piece. In one case, an out-of-date quote in a news article, from before the launch of illuminative library tools like Unsub and COUNTER 5, is used to claim that libraries won’t cancel subscriptions due to the availability of free materials. We know that these tools are being promoted to libraries for this purpose, and the long list of subscription cancellations available from SPARC suggests they are effectively being used.

There is also no analysis of the economic impact of the open data policy in the document sent to Congress. Open data requirements are a much bigger undertaking (with potentially greater upside) than policies toward public access to research papers. To be effective, such policies will require extensive infrastructure, ongoing maintenance and improvements, data-type and associated metadata standards, and significant cultural change within the scientific community. The Nelson Memo calls for agencies to develop plans to eventually archive and make public all data resulting from federal funding, rather than just the tip of the iceberg of data associated with published papers. A bio-imaging lab or an astronomy research group can churn out terabytes of data every single day. Organizing and perpetually storing those data to meet FAIR principles is not a task to be underestimated. I note that above, my colleague Tim Vines, using the OSTP’s favored 5% figure, suggests that it would cost $135M per year to make available the data from one sub-agency of just one of the many agencies under these requirements, which works out to $8.25B annually for the 2021 federal research budget of $165B.
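
As a quick sanity check (nothing more than back-of-the-envelope arithmetic on the figures quoted above):

```python
# The 5% set-aside applied to the budgets cited in this post.
ninds_budget = 2.7e9           # NINDS 2021 budget, in dollars
federal_research_2021 = 165e9  # 2021 federal research budget, in dollars
share = 0.05                   # the 5% figure discussed above

print(f"NINDS: ${ninds_budget * share / 1e6:.0f}M per year")                          # -> $135M
print(f"All federal research: ${federal_research_2021 * share / 1e9:.2f}B per year")  # -> $8.25B
```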

If even close to accurate, that’s certainly not a figure to gloss over when considering economic impacts and vastly higher than the likely costs to be seen for public access to papers. As one of the authors of a Day One Project proposal for a federal open data policy, I can vouch for the importance and value of open data, but also the complexity involved and the unlikelihood of success with no additional funding and an abbreviated timeline.

The Nelson Memo outlines an ambitious policy agenda. Careful planning, and a clear-eyed understanding of likely secondary effects and unintended consequences will be required if it is to succeed. As with the Holdren Memo before it, policy documents like this are deliberately vague – they offer a set of requirements and the actual details and implementation are pushed to the funding agencies to figure out. The opportunity to bring some real-world data and an understanding of how best to proceed will hopefully happen as the agencies put together their plans.

Tao Tao: Since this federal policy guideline was announced, everyone is talking about its impact, which obviously will not be confined to the United States. One cannot help but wonder how the other governments will react. For example, will the Chinese government announce a similar policy soon? In my opinion, that won’t happen. For a country that publishes as heavily as it reads, the government’s approach toward open access is continuing to develop.

Only a few years ago, the attitude toward open access was still ambiguous. When a Chinese committee announced, at a publishing meeting in Berlin, full support of Plan S, it was the first official voice from a major Chinese funder in support of the bold EU open access policy. Even then, I heard Chinese STM publishers saying that the OA advocates were librarians who didn’t represent them. Back then there were also many popular posts describing open access journals as predatory journals.

But in the last few years, the attitude has quietly changed. The criticizing voices have almost disappeared and there are now many webinars discussing open science which are either organized or sponsored by the government. Still, while the government’s attitude may no longer be ambiguous, a nation-wide mandatory OA policy is unlikely to happen soon.

For Chinese authors, while publishing OA may be more accepted and encouraged than before, a journal’s publishing model has never been much of a concern compared to its impact. We can expect that, with this US mandatory OA policy, more publishers will accelerate their transition to open access, and China’s spending on publishing will inevitably increase. There is just one problem: less than 5% of Chinese papers are published in journals owned by Chinese publishers. This OSTP policy announcement adds urgency for China to further develop its own STM publishing ecosystem. The slogan “publishing papers in the motherland” is now more often accepted than questioned. I will not be surprised if this new policy leads to stronger plans to drive the development of Chinese-owned journals.

Alison Mudditt: My initial reaction? AT LAST!! I don’t mean to dismiss the genuine challenges that the new policy presents for some (the humanities and society publishers, to name two obvious ones) but I’m confident that my fellow Chefs will do more than justice to those. This is a big win, not only for OA advocates but also for equitable access, trust in research, and scientific progress. For me, open access is at the core of a sustainable, equitable, and inclusive open science future.

The most important benefit is free and unfettered access to research. It still matters in the Global North at a time when research is more critical than ever. The number and scale of domestic and global challenges is unprecedented; broad access to trustworthy research is non-negotiable if we’re to address these successfully. The inclusion of data will increase the impact of the policy and facilitate understanding of the reliability and robustness of findings, and also the trustworthiness of science (as in “we’re prepared to share evidence of our claims”). I also hope that this will spur innovation here given the number of challenges involved in making data truly FAIR.

And in spite of the progress we’ve made over the past two decades, open access still matters across lower and middle income countries. They are currently bearing the brunt of problems created by the Global North – whether the climate crisis, environmental degradation, or the war in Ukraine. Meeting these challenges starts with open and immediate access to the research literature.

My main quibble: I do wish that there were mention of business models beyond endorsement of the green route. Instead, the mechanism by which public access is to be achieved is left to the individual agencies. There is a not insignificant risk of a default to gold and further entrenchment of the big commercial players. But for now, I’m letting my inner optimist shine through and believing that the new policy will spur long overdue innovation in this arena (especially given that the policy focuses on the importance of equity in implementation).


The Chefs had so much to say that we are publishing the remaining responses tomorrow. In the meantime, please join the discussion in the comments!

What are YOUR initial thoughts about the OSTP policy announcement?

Ann Michael

Ann Michael is Chief Transformation Officer at AIP Publishing, leading the Data & Analytics, Product Innovation, Strategic Alignment Office, and Product Development and Operations teams. She also serves as Board Chair of Delta Think, a consultancy focused on strategy and innovation in scholarly communications. Throughout her career she has gained broad exposure to society and commercial scholarly publishers, librarians and library consortia, funders, and researchers. As an ardent believer in data informed decision-making, Ann was instrumental in the 2017 launch of the Delta Think Open Access Data & Analytics Tool, which tracks and assesses the impact of open access uptake and policies on the scholarly communications ecosystem. Additionally, Ann has served as Chief Digital Officer at PLOS, charged with driving execution and operations as well as their overall digital and supporting data strategy.

Tim Vines

Tim Vines is the Founder and Project Lead on DataSeer, an AI-based tool that helps authors, journals and other stakeholders with sharing research data. He's also a consultant with Origin Editorial, where he advises journals and publishers on peer review. Prior to that he founded Axios Review, an independent peer review company that helped authors find journals that wanted their paper. He was the Managing Editor for the journal Molecular Ecology for eight years, where he led their adoption of data sharing and numerous other initiatives. He has also published research papers on peer review, data sharing, and reproducibility (including one that was covered by Vanity Fair). He has a PhD in evolutionary ecology from the University of Edinburgh and now lives in Vancouver, Canada.

Robert Harington

Robert Harington is Chief Publishing Officer at the American Mathematical Society (AMS). Robert has the overall responsibility for publishing at the AMS, including books, journals and electronic products.

David Crotty

David Crotty is a Senior Consultant at Clarke & Esposito, a boutique management consulting firm focused on strategic issues related to professional and academic publishing and information services. Previously, David was the Editorial Director, Journals Policy for Oxford University Press. He oversaw journal policy across OUP’s journals program, drove technological innovation, and served as an information officer. David acquired and managed a suite of research society-owned journals with OUP, and before that was the Executive Editor for Cold Spring Harbor Laboratory Press, where he created and edited new science books and journals, along with serving as a journal Editor-in-Chief. He has served on the Board of Directors for the STM Association, the Society for Scholarly Publishing and CHOR, Inc., as well as The AAP-PSP Executive Council. David received his PhD in Genetics from Columbia University and did developmental neuroscience research at Caltech before moving from the bench to publishing.

Alison Mudditt

Alison Mudditt joined PLOS as CEO in 2017, having previously served as Director of the University of California Press and Executive Vice President at SAGE Publications. Her 30 years in publishing also include leadership positions at Blackwell and Taylor & Francis. Alison also serves on the Board of Directors of SSP and the Center for Open Science.

Discussion

15 Thoughts on "Ask The Chefs: OSTP Policy Part I"

“In this day and age, who needs to consult with stakeholders to see what sustainable open policies should look like?”

As if there weren’t such consultations with stakeholders.

Can you point to specifically when those meetings took place with this administration and who those stakeholders were? I haven’t seen any public release of such records. Thanks.

While I do feel that more consultation would have been beneficial – both for clarifying the realities and, quite frankly, for supporting a change management process/approach – it isn’t completely fair to say there was no consultation at all, thereby discounting the meetings that were held on this subject with publishers during the previous administration. I attended OSTP meetings and there was rich dialogue about the ramifications of various policies. I’m not saying they were enough, but they did occur.

If we want to be angry or grieve, that’s fair. But honestly, we need to move forward – attempt to engage now and understand what our options are – or if there are actually opportunities present as a result of this move.

Understood, though I might differ on whether consultation with a previous administration on a different policy should be considered as an effort made by the current administration on this new policy, and I might add that those consultations with the last administration took place long after the policy was written and had gone through several rounds of review with the funding agencies.

Your point, however, is a good one, and looking backward should only matter in terms of what is done differently going forward. The lesson here is that publishers do not put enough effort toward engaging with policy makers, and only seem to find the time (and money) when reacting to a perceived crisis. A smarter approach would be to invest in building better relationships and offering better support as policies are being crafted, earning a seat at the table rather than being deliberately excluded.

It strikes me that consultation is most worthwhile when you’re not quite sure what the party you want to consult with might say. If you know that e.g. AAP are going to be vehemently opposed to an upcoming policy change, then there’s not much to be gained from asking them. You’re just going to get an earful and give them the chance to mobilize against you.

If, on the other hand, the party is likely to give useful & interesting feedback, or surprises you by being supportive, then there’s a lot to be gained from consultation.

Yes, exactly. Publishers have a reputation for being “the party of NO”. We need to find ways to be more constructive and to bring something positive to the table rather than just trying to block anything from happening.

To clarify one of my points that David picked up on: I said that getting all of the data and other outputs associated with the 10K NINDS articles into the public sphere “would be possible for *less* than the 5% of budget” that Barend Mons recommends. In fact, it’s likely much less. We (DataSeer) are doing this exact process right now for articles published by Aligning Science Across Parkinson’s grantees. Scaling this process up to cover all of the NINDS articles would come in at around $10M, which is roughly 0.4% of the NINDS budget.

There will of course be other costs, but we should bear in mind that stakeholders have been building data sharing infrastructure for the last decade, so a lot of the money that David worries isn’t available has, in fact, already been spent.

The above doesn’t address datasets that never become associated with research articles and remain in the lab. For the moment these are in the category of ‘known unknowns’: we know labs have these data, but how much of it is there? Do 90% of datasets end up published, or just 10%? How much is of shareable quality? There’s a lot of groundwork to be done to answer these questions.

Nonetheless, the regular mantra of open science proponents still applies: the perfect is the enemy of the good. Getting all of the data and other objects associated with government funded articles into the public sphere would still be a huge achievement, and it’s not nearly as far out of reach as it seems.

I would argue that not all data is the same, and costs are going to vary enormously from field to field. It’s not just the costs of getting the data out to the public; there are also the costs of creating and implementing new metadata and tagging standards for each data type, maintaining an ever-growing repository of data (forever?), continuously updating those data as new standards and technologies emerge, and maintaining backward compatibility for older data in obsolete file formats. Then there are training expenses — effective data availability will require specific training in study design for most researchers.

I’d also question whether any of this will end up being the responsibility of journals and publishers. If a funder requires all data behind a paper to be made publicly available, that is the responsibility of the funded researcher, not their journal. For a journal to do all the checks and validation of accompanying data takes extra effort and incurs further costs. It could be a useful service offered to authors, but again, funds will be needed to pay for those services.

Regardless of the specific number attached to those funds, I think we agree that it’s not going to be zero.

I would argue that data are mostly the same: our training data come from 3000 or so open access articles and c. 80% of the datasets are just tabular data that can be shared on one of the generic repositories (e.g. Dryad or Zenodo). The rest are more specialized datasets – chiefly microscope images, next generation sequencing data, mass spec data – but there are already numerous specialist repositories for these. Some fields do produce staggering amounts of data (e.g. astronomy or particle physics), but data management at the institutes involved is already extremely sophisticated. It’s the long tail of small-ish datasets that needs to be brought into the public sphere.

With respect to whose responsibility this will be, I think something like this would be most efficient: systems like Editorial Manager or ScholarOne pass in-review manuscripts to a third party service for compliance checking, and the service tells the authors what they need to do to comply with their funder’s policy. The funder then gets invoiced by the third party. The journal’s involvement is limited to managing metadata and API integrations.

More generally, the message I’m trying to put across is that promoting open data (and other outputs) at scale is possible. Not necessarily easy, but absolutely possible.

I worry that all funding agencies hear from industry experts (e.g. David, above) or from researchers is “it’s so hard! it’s so complex! it’s going to be unbelievably expensive!”, with the result that funders panic and reduce their open science ambitions to mild suggestions and very long timelines. And that helps nobody.

Look at it this way: if funders required all their grantees to wear green socks 24/7, and put together a monitoring mechanism, then all of those grantees would have on green socks. They have to: no green socks, no funding, no job.

Same with open data. Establish a monitoring mechanism (like I outlined above) that makes researchers individually responsible for sharing their outputs, tell your grantees that it’s going to be a requirement in one year’s time or they won’t be considered for future funding, and press ‘go’. And just like that, we achieve open science, and with it a great boost to public (and researcher) trust.

Oh, it is absolutely possible. None of what is here, in Plan S, etc. is impossible! Some of it is difficult though, and no one is served well by pretending the costs and complexities are less than they are either. And … we might also want to be at least a bit attentive to what is lost if those who have no supply chain for green socks just disappear … I do appreciate the sharing of the DataSeer numbers. I think we can probably all agree that more empirical data would be really useful in modeling out just what the costs are!

I’m 100% in agreement with Lisa above. This is important, this is worth doing. But it’s also worth doing right, rather than failing, creating a hostile environment and resentment among researchers, and having to tear the whole thing down and start again. Going in with a clear understanding of what it takes to do it right is essential, rather than just winging it and hoping for the best.

I do wonder about textual data sets. The kinds of corpora that digital humanists build, focus group and interview transcripts from social scientists, etc. These are also “recorded factual material commonly accepted in the scientific community … to validate and replicate research findings.”

I remember hearing (can’t remember where) that most of the data in Dryad at the time was in the form of Excel spreadsheets, so I suppose some of the complexity will be based on questions of what, exactly, is meant by “data”. There’s a big difference between curating and long-term storing thousands of 24-hour time-lapse movies of embryos and doing the same for the one spreadsheet containing the quantitative data derived from those movies (e.g., cell counts, speed of cell movements). One takes terabytes of space, the other a few kb, and each has very different requirements to make it useful for future generations. As you note, labs and institutions have developed practices for data management, but much of that is largely focused internally rather than on the external needs of this policy (archiving your own data for your own use versus archiving data for the use of others), so some adjustments will be needed. Further, it’s unknown if those data storage facilities are adequate for the purpose here, or if agencies will start their own repositories, making them redundant.

Re: Publishers’ role, given the consolidation in the market and the big publishers’ strategy of selling packages of services across the entire research workflow, I would suspect that any third party suppliers would quickly be purchased (or outcompeted). Why let someone else do what you can charge for yourself? And again, that’s another invoice, and more money needed.

So if the result of all this is to force gold OA as the dominant business model, how many journals that cannot publish enough volume to generate enough revenue will fold, and what ripple effect will that have on the societies their revenue helps support? Is everything going to consolidate to one journal per discipline? I’m only half joking. Also, journals can publish a wide range of unfunded research – I did a study in cardiology that showed 25% of articles in one leading journal were unfunded. By forcing one model, it could create all sorts of inequities for unfunded research and could in fact suppress such work, which would be a real shame.
