According to a recent study, there are now more than 663 funding agency/institutional policies requiring public access to research papers. Last January I wrote about the unexpected consequences of these policies and the administrative nightmare around efforts to keep researchers in compliance. Nature’s recent “Author Insights” survey provides some new evidence of the scope of the problem.
We are in the midst of an era where funders and institutions are imposing more and more requirements on researchers. Increasingly, researchers have to prove the value they’ve returned on those scarce grants, and both funders and institutions are looking for ways to drive public access to, and impact of, the research that’s being performed. Many of these policies require researchers to publish their results in a particular manner, either under specific access and licensing terms, or more often, to provide public access to some version of each research paper after an embargo period.
Failure to comply will result in a loss of funds — either the agency will hold back the rest of your current grant or you’ll be ineligible for future funds. Institutional policies seem a bit less mandatory, with ways to opt-out and no clear punitive measures stated. Regardless, it’s in a researcher’s best interests to keep these folks happy. But to do so, you have to be aware of what they’re asking you to do.
Nature’s recent survey of some 21,000 authors gives a sense of how well funder policies have been communicated (spoiler: not well). Of those surveyed, 25% reported that, “they did not know their funder’s requirements with respect to open access.” Of those who did claim to know their funder’s policy, more than 40% got it wrong. So that means more than half of those surveyed were in the dark when it comes to compliance.
The RCUK has already shown us the enormous cost and effort involved in getting minimal compliance for a small population of researchers for a single policy. Multiply those costs by at least 663, then assume that each paper has multiple authors from multiple institutions (possibly different countries as well) and multiple funding sources. If you are a research administrator or a librarian at a research-intensive institution, you may find yourself beginning to break out in a cold sweat.
As we learned from GI Joe, knowing is half the battle. There is an enormous amount of work that needs to be done to raise researcher awareness of their coming obligations. Cornell University has started a website to provide information on compliance to researchers, although this is limited to just 5 US federal funding agencies and some suggestions about how to track down info from “all other funders”. It’s a good start and I suspect we’ll be seeing more resources like this across the research community.
It’s the other half of the battle though, the actual compliance, where the majority of effort and costs come into play. So far, funding agencies don’t seem to be offering a great deal of financial support for the increased administrative burden, either to schools or to researchers themselves. Compliance is a valuable service that journals could offer to authors, but even for the most technologically sophisticated publisher, running each paper through a complex combinatorial matrix of 663 factors and potential outcomes, followed by sending multiple versions of the paper to multiple repositories under differing terms may be too much to handle, certainly at least without passing on costs to customers.
What’s obviously needed here is automation. Where these policies can agree on standardized terms and require the use of open tools like DOIs, ORCID IDs, and CrossRef’s FundRef service, complexity can be reduced and systems can be built. SHARE’s notification tool can help institutions track their researcher’s obligations, although the onerous task of fulfilling those obligations still remains. A centralized, common system for automated compliance that is built directly into the publication process, such as CHORUS, seems an obvious way to reduce everyone’s time, effort and expense.
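To make the idea concrete, here is a minimal sketch of the kind of automation standardized identifiers make possible. The policy table and funder IDs below are invented for illustration (real FundRef IDs are DOIs, but actual policy terms live with each funder); the point is that compliance routing becomes a lookup rather than manual interpretation of 663 policy documents.

```python
# Hypothetical sketch: once a paper carries standardized identifiers
# (a DOI, ORCID iDs, FundRef funder IDs), its deposit obligations can be
# resolved by lookup. The policy table here is invented for illustration.

POLICIES = {
    "10.13039/000000001": {"funder": "Example Agency A",
                           "version": "accepted manuscript",
                           "embargo_months": 12,
                           "repository": "agency repository"},
    "10.13039/000000002": {"funder": "Example Agency B",
                           "version": "version of record",
                           "embargo_months": 6,
                           "repository": "subject repository"},
}

def deposit_obligations(paper_funder_ids):
    """Return the deposit actions implied by a paper's funder IDs."""
    return [POLICIES[fid] for fid in paper_funder_ids if fid in POLICIES]

# A paper with one known funder and one not yet in the table:
for o in deposit_obligations(["10.13039/000000001", "10.13039/999999999"]):
    print(f"Deposit {o['version']} in {o['repository']} "
          f"after a {o['embargo_months']}-month embargo ({o['funder']})")
```

Unknown funder IDs simply fall through, which is exactly where services like CHORUS and SHARE would flag a paper for human attention.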
No one wants to see researchers losing their vital funds over administrative details. The more we can do about this in advance, both in terms of awareness and lifting that burden, the better.
37 Thoughts on "Researchers Remain Unaware of Funding Agency Access Policies"
This is clearly a regulatory mess, with a high confusion cost due to combinatorial complexity. Imagine how much worse it will become if different disciplines start getting different embargo periods from different funders, yet that is probably where we are headed.
Agreed – and the mess is even worse when the publications – especially journal articles – are completed after the funding period. Both you and David Crotty have outlined many issues previously (see http://scholarlykitchen.sspnet.org/2012/01/06/my-argument-for-public-access-to-research-reports/ and http://scholarlykitchen.sspnet.org/2015/08/11/revisiting-is-access-to-the-research-paper-the-same-thing-as-access-to-the-research-results/ ).
An additional set of problems arises where one or more of the authors no longer work at the institution. If they are on contracts tied to the funding, and do not have ongoing employment with the same institution (e.g. through tenure or a longer term position), they may have no links to the institution after the funding ends. If journal articles are written after the contracts end (which is apparently common), authors are not only doing it for free, but have probably lost access to institutional supports including OA funding and administrative support.
That is an important point, Emma. Prior to tenure, relocation is the rule, especially for postdocs. I need to factor this into my model. The notion of access seems simple, but research, funding, administration and publication form a very complex system.
In Australia we also talk about the “casualisation” of the research workforce – and it’s not just post-docs.
May I add that writing papers after one leaves an institution raises some interesting legal and ethical questions about accessing and storing the raw data, results, analysis etc.
I suspect that cloud services like OneDrive, Google, and Dropbox are hosting extraordinary amounts of data that shouldn’t be there … and are possibly data mining the s*%t out of it.
Ugh – open tools like DOIs, ORCID IDs don’t reduce complexity, they add another layer of bureaucracy one has to navigate. In my field, virtually every (new) paper is open access, and has been for some time, yet I have to spend my time listening to university compliance officers give me presentations on why and how to make papers open-access, which I – and literally every person in the audience – was already doing, and then spend time proving to them I do things I was obviously going to do anyways, and then sorting out their errors. All in the name of reduced complexity.
Sounds like a good case of confusion, Andrew. Why doesn’t the compliance officer know what you folks are doing? Where is the disconnect? Is it because compliance is so new that the CO has no data?
Yes, it was a newly set up bit of bureaucracy (they call themselves a service, though I wouldn’t). So, of course, I wouldn’t expect them to know what’s up off the bat. But charging into our department to tell us how we’re going to have to start making everything open access without bothering to look into what we were doing first did not endear them to me – or really, anyone in our department. No one wants to sit through a presentation on how and why to do something they already do, let alone navigate an additional layer of bureaucracy to check off.
If anyone in your Dept gets federal funding then new requirements are coming, so what you do now is not enough. You will have to submit your manuscripts to the various funding agencies, which will have different systems. Your present format may not be acceptable. Perhaps this is why the CO is there. Automating this complex new process is what David is talking about. That you are already doing green OA is irrelevant.
I can’t necessarily speak to that, but they were only ensuring we were Open Accessing per our funder requirements, and explicitly told us what we were already doing is fine (and I’ve subsequently published a few papers, for which the only difference was wading through the additional, internal bureaucracy).
I disagree. DOIs are automatically appended to papers. ORCID takes a one-time sign-up. FundRef is a simple pull-down menu when one is submitting a paper (certainly less complex than trying to parse the “Funding” section of any given paper). These simple acts all allow for automation of more complex processes like compliance with multiple policies.
Think of something like the CHORUS dashboards (http://dashboard.chorusaccess.org/), which are powered by the above open tools. If your compliance officer had access to a university-specific dashboard, then I suspect he/she wouldn’t hassle you so often.
Okay, so perhaps DOIs are innocuous, but ORCID adds another layer of bureaucracy with no upside. ORCID takes a one-time sign-up, plus a password reset every time it’s used, and is just red tape that has to be done for some compliance officer or another but doesn’t add any value. I haven’t encountered FundRef, but “Hey, let’s have authors acknowledge their funding sources, then force them to do it again” is, again, just adding a layer of bureaucracy to navigate.
These kind of tools don’t make things easier for authors, or overall. All they do is pass administrative tasks from administrators who do them routinely and understand them to researchers who have to figure out basically from scratch how to do it each time because they encounter them maybe once or twice a year. And so it seems easier for you, because you’re offloading it. But it’s far from automated, because the person actually doing it has to relearn how to do it each time they do it.
My university does use some specific dashboard-like tool (possibly CHORUS, though I can’t find it listed anywhere). But, of course, having to do *anything* is a hassle, because I was already making all my manuscripts (green) open access, just like every single person in my department, so making us do anything else is already unneeded bureaucracy. And, in addition to having to re-learn their system each time I publish a paper, I’ve also been hassled because the compliance officer wasn’t familiar with the format of my preprint (which is pretty standard in my field, though perhaps the 2nd or 3rd most common). The total hassle is perhaps only an hour per paper or something, but it’s one of the thousand bureaucracy cuts one takes, and they add up.
Such is the price of receiving funding/employment. Sometimes we all have to do tedious things that we would rather not do or that seem pointless and annoying. Given the scarcity of research funding and jobs, funders and institutions are increasing the burden they put on researchers. It’s a buyer’s market, and if you won’t jump through the hoops, there are hundreds, if not thousands of qualified PhD holders out there who would love a faculty position.
If you have to reset your ORCID password every time you use it, then you’re doing something wrong. I’ve had an ORCID account for years now and have never once had to reset my password. What’s really useful about tools like this is that there’s all sorts of development going on to automate various processes so you don’t have to spend any time on it–for example, there’s work being done on many journal systems that will automatically add a new published paper to your ORCID page, provided you supply your ORCID ID when you submit the paper. Cutting and pasting in one number to a form you’re already filling out is a lot less work than updating your profile page. Ditto selecting your funding source from a pull-down list rather than writing a paragraph in your paper acknowledging it (and in many cases, having to use specific wording to allow the publisher to pick up things like NIH funding so they can arrange deposit in PMC).
Also, your school is not using a CHORUS dashboard because institutional ones are still under development and not yet available.
But these tools are low-effort, and enable automation of even more time-consuming activities. Better to do small things and let the machines do the heavy lifting. These tedious demands aren’t going away, and it’s likely they’re only going to increase over the course of your career. There are worse problems to have.
Well, of course I have to do it so I do, but that’s no reason not to kvetch about it. I do a lot of bureaucracy things I don’t like, and I kvetch about them all, since it’s all I can do.
But it’s still flawed assumptions that are making you think it’s less time consuming. I know what I’m doing wrong with ORCID: I’m submitting to journals that require one to submit an ORCID ID number (what one does for their collaborators, eh?). That means I have to go to ORCID, request a password reset email, put in a new password, and copy-paste the number. But I wouldn’t otherwise update my ORCID page; I’d otherwise do nothing. It’s not time-saving, it’s time-adding (even if ORCID, say, is only adding ~5 minutes of purposeless work per paper, that’s still five more than zero).
First, why can’t you remember your own password? That hardly seems like a flaw in ORCID to me. Perhaps more importantly, you don’t need to sign in to get someone’s ORCID ID number–just do a search on that person’s name. And as I said, soon you won’t have to update your page, it will do it automatically for you.
Which would be less time-consuming–pasting in your coauthors’ names, affiliations, addresses, phone and email addresses or simply cutting and pasting in their ORCID ID numbers? This is something else in development, systems that will automatically draw that data out for grant applications, paper submissions, etc.
Then rather than having to go to your funding agency’s website and fill out more forms to prove that your papers have been made publicly available, they can automatically pick that up from your ORCID ID and the FundRef data you briefly entered as you submitted the paper. Not to mention getting the right version of the paper with the right embargo settings into multiple repositories. 5 minutes spent is better than spending a full day doing these things by hand.
“Which would be less time-consuming–pasting in your coauthors’ names, affiliations, addresses, phone and email addresses or simply cutting and pasting in their ORCID ID numbers? ”
Is this what’s supposed to happen? I’ve submitted two journal manuscripts recently, both of which asked for my ORCID details. Neither pulled any of my ORCID details into the manuscript submission system – I had to fill in everything manually.
Useful passwords, ones I use regularly, I remember with ease. Passwords to things I access once or twice a year, and which are unimportant, I invariably forget. Perhaps my memory is worse than average, but I suspect not. And going from having to have an account with a journal, and one with a funding agency, to one with a journal, one with a funding agency, and one with ORCID doesn’t make it easier to remember them all and keep ’em straight.
But still, it’s a re-invention of a wheel already being used. When I type my co-authors’ names into journal submission pages, the journal *already* fills out their affiliation, address, and so on. So no, having to track down their ORCID wouldn’t save time, it would add work (and I’d probably find in practice that their ORCID doesn’t exist, or is missing most of this info – checking, two people on ORCID have my exact name, and neither profile has any information beyond our name – it couldn’t be used to avoid having to enter that info).
I think we’re going off into the weeds a bit here, will try to get back on topic.
Yes, it is true that researchers may be asked to endure a few minor inconveniences in order to 1) save themselves time and effort and 2) to save time and effort for others at their institutions and their funding bodies. Getting back to the two services noted in the post above, I can see two scenarios:
You publish a paper and have a funding agency requirement that it be made publicly available after some embargo period. You can:
1) notify your institutional compliance officer upon acceptance/publication. Carefully read the journal’s policies as to what they allow. Carefully read the funding agency’s requirements as to what you are supposed to do. Select the appropriate version of the paper (author’s original version, accepted manuscript, version of record). Determine the appropriate embargo period required by the funding agency and allowed by the journal. Deposit the appropriate version of the paper in an approved repository, setting the appropriate embargo period. Report this deposit to your compliance officer. Report this deposit to your funding agency.
Each paper will likely have multiple funding sources and potentially multiple institutional policy requirements, so you’ll have to repeat these steps multiple times for each paper.
2) Publish your paper and add your ORCID ID and identify your funding sources during the submission process.
The latter seems like a lot less hassle to me. Behind the scenes, a tool like SHARE will automatically notify your compliance officer and keep them in the loop. Meanwhile, a service like CHORUS will automatically identify your paper, notify your funding agency and take care of making it publicly accessible, all with no further effort to you.
We know that where these policies are left to manual deposit by authors that compliance levels are awful, and that where things are automated, compliance levels are much, much higher. Seems like a win:win to me.
And as for passwords (remember, you don’t need a password to get someone’s ORCID ID), you might consider an overall management strategy. A simple one is to pick one basic password and then add a modifier based on the name of each place that you’re using it, which lets you have a unique password for each site, but one that is easy to remember (see “Password Algorithm” here http://www.makeuseof.com/tag/use-a-password-management-strategy-to-simplify-your-life/).
Yours is a grand vision, David, but we are a long way from anything like this. A lot of interesting work lies ahead. For example, FundRef lists over 8,000 funders so it is not a pull down menu. Serious discovery tools are needed, possibly including artificial intelligence, to make it easy for an author to specify the relevant funders. Similar issues exist for SHARE and CHORUS, because this is a very complex ecosystem. I am thinking it will take 20 years to make this happen. That is no reason not to forge ahead, just a measure of the magnitude of the task.
Correction: the FundRef Registry currently lists over 10,000 funding agencies:
On our systems it is a blank space where one starts typing, and once letters are entered, suggestions are offered in the same manner that Google offers suggestions once one starts typing in a search term. If the funder is not listed in the registry, the author can continue to type the full name in and have that associated with the paper. As these systems continue developing (and as I’ve noted several times, these are still early days), the quantity and quality of information will continue to improve, particularly as funding agencies continue to see their value and continue to work with the folks at CrossRef/FundRef to refine the registry information.
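The lookup behavior described above amounts to simple substring matching against the registry, with free-text fallback when nothing matches. Here is a toy version; the registry names are stand-ins, not the actual FundRef data.

```python
# Toy funder-lookup: as the author types, suggest matching registry
# entries; if nothing matches, the author keeps typing the full name
# and that free text is associated with the paper instead.

REGISTRY = [
    "National Science Foundation",
    "National Institutes of Health",
    "Natural Environment Research Council",
    "Wellcome Trust",
]

def suggest(typed, registry=REGISTRY, limit=5):
    """Return registry entries containing the typed text, case-insensitively."""
    t = typed.lower()
    return [name for name in registry if t in name.lower()][:limit]

print(suggest("nat"))   # matches the three "Nat..." funders
print(suggest("Acme"))  # no match: author continues typing the full name
```

A production system would rank by popularity and tolerate misspellings, but even this naive matching spares authors from scrolling a 10,000-entry pull-down.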
Technically this is an update (thanks!) not a correction because over 10,000 is also over 8,000. In any case it is a grand challenge. It would help if the funders provided data, such as awardee lists. Then the system can present the likely funders given just the name or ORCID. If I had all the award technical synopses I could probably semantically identify the funder from the article abstract. That would be true automation.
Life would be vastly easier if funders kept a public database of grant awards and had some semblance of an organized system for tracking and identifying grant numbers. Unfortunately, they seem to have no interest in doing so.
The funders created this situation so they are responsible for minimizing the burden. In the US at least we have a mechanism for doing so, namely the Paperwork Reduction Act clearance process, which I helped create 2.3 eons ago. I describe it here:
Both NSF and NIST have had public comment periods on their Public Access plans and I have filed comments stressing the need to minimize burden. I hope some publishers have done likewise.
I think, if we envisioned the same outcome, we might agree. But where you write OR, I can only see the actual outcome being AND. OR would depend on my institute, funder(s), journal(s), co-author(s) all being on exactly the same page, which is exceedingly unlikely to actually happen. More likely, each will be using a different “universal” standard. And even if done automatically to start with, someone or something else will get confused, and I’ll have to sort it by hand anyhow.
And, I already know the journals policies for sharing preprints. I already know my funder’s requirements. I’ve already suffered through figuring out my institutional reporting requirement. Replacing a process I know how to work with a new one is inevitably going to chew through whatever time savings are achieved several times over.
Is this pessimistic? Maybe. The specific example of ORCID might’ve brought up a lot of memories of needless bureaucracy for no gain.
Asking organizations to coordinate and do what’s most efficient for all involved may indeed be a pipe dream, but it’s a goal worth pursuing. And while you may know your funder, institution and journal policies at the moment, these are likely to continually evolve over time, which means even more time and effort invested on your part.
Andrew, apparently you are not in the US or if so then not in a STEM field. The new, massive federal Public Access program requirements are still emerging, so one cannot know them at this point. What is emerging varies significantly from agency to agency and is very complex. But my understanding is that new funder requirements are emerging globally as well. It sounds like you are not aware of this movement, which ironically is the topic of this post — unawareness.
Most of the folks I talk to ignore their institutions’ requirements. They can’t figure out whether they should opt out, or in, or whether there is an embargo issue. Well, it’s not that they “can’t” figure it out. They don’t bother because there are no consequences. Funders are an entirely different story.
Here’s something interesting. I am reviewing the just released Smithsonian public access plan and they give their scholarly press a major role in compliance. I wonder if any universities do this. See the roles and responsibilities section of the Plan:
It seems to me that somewhere in the contract provided by the funder there is a clause regarding publishing. Thus, the receiver of the grant has to read the contract. The dicey part is when there is more than one funder!
My former employer, a major US federal science agency, required that, if you published in a journal or anywhere else, you had to provide citation info to its monthly “new publications” newsletter (later on line). Sounds simple enough, and it applied to employees who, one thinks, would follow the rule. Not so! Compliance was abysmal and remained so for decades! The list was totally unreliable, woefully incomplete. Maybe it’s something in the scientist mindset that administration is for lesser beings.
Scientists are great at writing papers; in my experience, asking for more will always be problematic. A good compliance system will need to be proactive and work with the journals themselves, bypassing the undependable scientists.
Interestingly in this context, a memorandum was distributed yesterday to all ARL library directors. It was written by the VPs for research from three large universities, and it suggests that universities have an institutional obligation to facilitate faculty compliance with federal public access policies — on the logic that since federal grants are made to the university rather than directly to the faculty researcher, if the faculty member is out of compliance, the university itself is therefore out of compliance. The memo suggests four options for forging “the essential new link” between the institution and the grant recipient. All four of them involve either requiring the faculty member to assign a license to the university, or the university simply asserting a license in the faculty’s intellectual product.
As far as I can tell, there is no publicly-available copy of this memo online yet, but I’m sure it will appear online shortly and when it does I’ll post a link.
Please do, Rick. While this probably would increase compliance it will also increase the complexity of the US Public Access program, which I track. To my knowledge none of the agency plans contemplates getting journal articles from universities. They are either from authors or publishers (especially via CHORUS). But then many of the agency programs are still in development so adaptation is possible. Many of the contract terms are still being written so the agencies could even decide to impose these requirements on the universities.
On the other hand changing the system in midstream is a major complexity of its own. Echoing something we recently discussed here, the Public Access program is going to get worse before it gets better.
To be clear, I have heard from some agencies that since the relationship is between the agency and the institution/fundee, that third parties cannot be used to fulfill the obligations. In this case it would mean the researcher or their institution has to handle all depositing of articles in the agency’s repository. This is in contrast to PubMed Central, where the majority of its compliance success can be directly attributed to having publishers deposit on behalf of authors. My understanding is that these agencies, having been made aware of the likely impact on compliance, are considering altering their policies in this light.
“Scientists now spend nearly half of their time on administrative tasks…”, according to “Barham BL, Foltz JD, Prager DL (2014) Making time for science. Res Policy 43(1):21–31.”
Indeed. I have done government research projects where the research took less time than the paperwork, when you count writing the lengthy proposal and budget documents. For this reason I typically charge my government clients about twice what I charge my industry clients, the latter accepting a one page proposal and a handshake.
I once did a study for the US Naval Research Lab as to why it took a year for the researchers to buy a piece of equipment that they needed in order to advance their research, something they complained bitterly about, with good reason. It turned out that there were just under 100 steps in the procurement workflow, most of which were reviews and approvals by various offices, all mandated by law. Each step took less than a week but it added up to a year.