Publishers have taken it upon themselves to ensure that funding information is not just communicated in the acknowledgments of a paper but is also captured in a standardized, controlled form in an article’s metadata. While I question the need for publishers to make it easier for funding agencies to keep track of their funded works, this framework will be immensely helpful in ensuring compliance with public access mandates.
When the OSTP memo was released, I set off on a fact-finding mission to see how many papers the American Society of Civil Engineers (ASCE) publishes that result from U.S. Government funding. The path to getting this information was ridiculously difficult.
I received an Excel file containing the acknowledgments sections of over 900 recently published papers across all ASCE journal titles. I then read every acknowledgment, parsed out the declared funders, looked up the abbreviated ones so I knew what they were, and noted the country or state of each funder. Once that was done, I grouped the papers by MAJOR funder, so that the Federal Highway Administration, for example, was counted under the U.S. Department of Transportation. This required looking up many of the funders to see which U.S. agency oversees each group.
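The roll-up step above can be sketched in a few lines. This is purely illustrative: the parent-agency mapping and the paper records are invented examples, not ASCE’s actual data.

```python
# Hypothetical sketch of rolling declared funders up to the parent U.S.
# agency that oversees them. The mapping and paper records are examples.
PARENT_AGENCY = {
    "Federal Highway Administration": "U.S. Department of Transportation",
    "National Institute of Standards and Technology": "U.S. Department of Commerce",
    "Army Corps of Engineers": "U.S. Department of Defense",
}

def roll_up(declared_funder: str) -> str:
    """Return the major (parent) agency, or the funder itself if top-level."""
    return PARENT_AGENCY.get(declared_funder, declared_funder)

# Invented (paper DOI, declared funder) pairs for illustration.
papers = [
    ("10.1061/example-1", "Federal Highway Administration"),
    ("10.1061/example-2", "National Science Foundation"),
]

by_major = {}
for doi, funder in papers:
    by_major.setdefault(roll_up(funder), []).append(doi)
```

Doing this by hand for 900+ papers is exactly the kind of work a shared taxonomy is supposed to eliminate.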
This process was long, tedious, and fraught with omissions and errors, but it was the best I had at the time.
To link research dollars to research papers efficiently, FundRef was born. After a pilot of about a year, FundRef officially went live in May 2013; it depends on a taxonomy of funding agencies provided by Elsevier.
I was tickled pink when our online submission system announced that it was now integrated with FundRef. We took a look at it and were a little disappointed in the search and drop-down options. Non-preferred terms and common abbreviations were an issue. An author entering DOE, for example, may not see U.S. Department of Energy in the drop-down list of 10 terms being shown. Typing in “dept of” anything also returns nothing. The author has to start typing the entire unabbreviated term to get the correct agency.
Okay, so it’s early. The drop-down list is sure to improve. All in all, the major agencies we see most frequently noted on papers are represented, including those outside the U.S.
We turned FundRef on and made it a requirement for all authors upon submission. After a few months, I got my hands on the first report of data using the FundRef plugin and taxonomy.
What a let-down. What I had in front of me was 3,000 rows of non-standardized, unhelpful information. Within the first 50 papers alone, I saw six different ways to identify China’s National Natural Science Foundation (NNSF). What makes this worse is that my earlier review of the acknowledgments showed that authors funded by the NNSF were quite consistent in how they declared funding.
We have papers funded by “usm” and “SHRP2”. This is how the funders were identified in the FundRef box, because authors are invited to ignore the drop-down list and add whatever they like, which they are apparently doing, particularly if they don’t see the agency they are looking for on the first try.
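One possible cleanup step, sketched below, is to normalize free-text entries against a table of known aliases before reporting. The alias table here is a small invented example, not an official list, and real-world matching would need to be far more forgiving.

```python
# Hedged sketch: map common abbreviations and variants of funder names
# to a single canonical form. Aliases shown are illustrative examples.
ALIASES = {
    "nnsf": "National Natural Science Foundation of China",
    "nsfc": "National Natural Science Foundation of China",
    "national natural science foundation": "National Natural Science Foundation of China",
    "doe": "U.S. Department of Energy",
}

def normalize(entry: str) -> str:
    """Return the canonical funder name if the entry matches a known alias."""
    key = entry.strip().lower().rstrip(".")
    return ALIASES.get(key, entry.strip())
```

Even a simple lookup like this would collapse several of the six NNSF variants into one row, though entries like “usm” would still need a human.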
I understand the need for the widget to allow authors to include funding agencies not already on the list, but I question whether there is a better interface for collecting this kind of information. Might there be an option where authors can say they have not found their agency on the list and then enter the information in a separate field for further inquiry? To my knowledge, the only way to add agencies is to send an email requesting that an agency be listed.
I know we are not the only publisher reporting trouble reconciling what authors enter in the FundRef box with what they have typed in the acknowledgments section. This has come up at CHORUS implementation meetings.
Looking at the data provided by CrossRef on the FundRef information page, 57% of the DOIs deposited with funding information name funders that are not in the extensive taxonomy of nearly 8,800 funding agencies, so those funders will not show up in a FundRef search.
FundRef is still in its early stages, for sure. This initiative will improve dramatically once publishers actually start signing on and depositing information. CrossRef talks about FundRef in terms of DOIs for funding agencies, but we aren’t there yet. Here is what I would like to see:
- Each and every grant would have a DOI-like identifier. Each agency would be assigned a prefix, just as publishers are with the DOI, and other information, including a grant number, would follow.
- Each agency would then deposit to FundRef the grant number along with the name and email of the Principal Investigator or grantee.
- Journal submission systems would collect the grant information as prescribed and have a way to check it against the FundRef database, just like we can validate references today. Any anomalies could be queried prior to publication.
- The grant “DOI” would be deposited along with the metadata for each journal article to CrossRef at time of publication.
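The validation step in this wish list could look something like the sketch below. Everything here is hypothetical: the prefix values, the registry, and the identifier shape are my invention, by analogy with how reference DOIs are checked today.

```python
# Hypothetical grant identifier: an agency prefix plus a grant number,
# e.g. "10.90001/CBET-1234567". Prefixes and agencies are invented.
AGENCY_PREFIXES = {
    "10.90001": "National Science Foundation",
    "10.90002": "U.S. Department of Energy",
}

def parse_grant_id(grant_id: str):
    """Split 'prefix/grant-number' and look up the issuing agency.

    Returns (agency, grant_number), or None for an anomaly that should
    be queried with the author prior to publication.
    """
    prefix, sep, number = grant_id.partition("/")
    if not sep or not number or prefix not in AGENCY_PREFIXES:
        return None
    return AGENCY_PREFIXES[prefix], number
```

A submission system could run this check at submission time, the same way reference lists are validated against CrossRef now, and flag anything that fails before the paper ever reaches a copyeditor.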
I know what you are going to say…funding agencies won’t do this. Hogwash! The whole point of this is for funding agencies to show what they got for the money they spent. Publishers are not going to be able to resolve this alone. FundRef is a good start but the funders need to claim a portion of this process if they expect to see the benefit.
The part that really makes me nervous is the dependence on the FundRef tools working in order to participate in CHORUS. Publishers who participate in CHORUS to comply with forthcoming and existing public access policies for federally funded works will need to have FundRef data deposited with their DOI metadata in order for CHORUS to properly identify the funder. For my department, this leaves no option but to rely on the acknowledgments and have copyeditors/taggers add the correct Funder ID to the XML of the articles. This process will surely lead to more author queries and more corrections in the proofing stage.
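For readers who haven’t seen it, what gets deposited looks something like the fragment below, based on CrossRef’s FundRef deposit schema. The funder identifier shown is the registry entry for the National Science Foundation; the award number is invented for illustration.

```xml
<fr:program xmlns:fr="http://www.crossref.org/fundref.xsd" name="fundref">
  <fr:assertion name="fundgroup">
    <fr:assertion name="funder_name">
      National Science Foundation
      <fr:assertion name="funder_identifier">http://dx.doi.org/10.13039/100000001</fr:assertion>
    </fr:assertion>
    <fr:assertion name="award_number">CMMI-0000000</fr:assertion>
  </fr:assertion>
</fr:program>
```

It is that `funder_identifier` line, the one a free-text entry like “usm” can never supply, that our copyeditors would have to resolve by hand.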
All of this is, of course, possible. But boy it would be nice if it were automated in some way.
Building the infrastructure for connecting research grants to published outcomes is not easy and initiatives such as FundRef are critical. If there is a way to make sure that the process improves—for authors, agencies and publishers—we all need to be helping it along.
Public access mandates are coming our way and identifying funding agencies will be critical. Before all of this becomes the law of the land, perhaps we need to get some of the kinks worked out.