Publishers have been the whipping post for those who feel that reports deriving from taxpayer-funded research should be made available free of charge to taxpayers. This has occurred despite the fact that there are alternative ways to get the directly funded research results to the taxpaying public. In most cases, these alternatives depend on the researchers themselves fulfilling the terms of their grants by filing reports and data with their funding agencies, which then make the reports available online via government Web sites. They also depend on the funding agency enforcing its own policies.

But what happens when studies reveal that NIH-funded researchers aren’t depositing their reports or their data within the time allotted to them? What happens when the NIH itself doesn’t chase down the reports it requires from its taxpayer-funded researchers?

Very little, it seems, despite the fact that on its face, this seems like a pretty egregious abrogation of duties. As the Food and Drug Administration Amendments Act of 2007 (FDAAA, H.R. 3580) is explained on the ClinicalTrials.gov site:

‘Applicable clinical trials’ generally include interventional studies (with one or more arms) of drugs, biological products, or devices that are subject to FDA regulation, meaning that the trial has one or more sites in the U.S., involves a drug, biologic, or device that is manufactured in the U.S. (or its territories), or is conducted under an investigational new drug application (IND) or investigational device exemption (IDE).

There are at least three studies showing low rates of compliance — two of these (one focusing on publication after registration in ClinicalTrials.gov, and the other focusing on mandatory reporting requirements for the same) were covered recently here. A third study of publication events, published in PLoS Medicine in 2009, was highlighted in our comments on an earlier post covering the first two studies.

After a week of spirited comments about open access, taxpayer-funded research, heartless publishers, and so forth, it was surprising to hear the virtual equivalent of crickets on a soft summer night when we published a post summarizing these findings.

Maybe Rick Anderson is exactly right — we only hear comments at the pitch and rate we’ve seen in some instances when we argue about “should” and not “is.” In that mode, and to get the conversation going, this is more a post about “should” — that is, researchers should keep their promises to the taxpayers funding their research by actually sharing the data, writing up the reports required of them, and depositing both with their funding agencies when the terms of their funding require such actions.

The ICMJE is supportive of the FDAAA and the ClinicalTrials.gov system:

. . . the ICMJE will not consider results data posted in the tabular format required by ClinicalTrials.gov to be prior publication.

Many publishers also support authors in this regard, including the oft-maligned Elsevier, which will help authors deposit their materials.

Despite this support and the mandatory nature of the reporting requirements, publication rates of trials registered at ClinicalTrials.gov were surprisingly low; perhaps most alarming was the low rate of compliance with the mandatory reporting requirements themselves.

Not all trials registered with ClinicalTrials.gov are subject to mandatory reporting requirements, and the researchers performing the recent study of reports being filed limited their scope to those grants that required their recipients to report their findings within 12 months of trial completion.

Of the 738 trials that were interventional (that is, patients were put at risk) and which required reporting within 12 months, only 22% had reported results. A full 78% had not. By comparison, trials not required to report their findings still reported them 10% of the time, so there’s only a 12 percentage point gain by having mandatory reporting requirements in place and agreed to upon trial inception.
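The arithmetic above can be sanity-checked with a small sketch. The trial counts below are reconstructed from the stated percentages and the 738-trial denominator, so they are approximations for illustration, not figures from the study itself:

```python
# Sanity check of the compliance figures cited above.
# Assumption: 738 trials were subject to the 12-month mandatory
# reporting rule; reported counts are derived from the stated 22%.
mandatory_total = 738
mandatory_reported = round(0.22 * mandatory_total)  # roughly 162 trials

print(f"Reported under mandate: {mandatory_reported} of {mandatory_total} "
      f"({mandatory_reported / mandatory_total:.0%})")

# Trials with no mandate still reported results about 10% of the time,
# so the apparent effect of the mandate is only ~12 percentage points.
gap_in_points = 22 - 10
print(f"Percentage-point gain from mandatory reporting: {gap_in_points}")
```

The striking comparison is not 22% versus zero, but 22% versus the 10% baseline reported voluntarily, which is what makes the mandate's marginal effect look so small.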

Let’s make this clear — these are researchers who applied for and received taxpayer funds on the condition that they report the results of the trials that were funded within 12 months of the trial ending. These were also interventional studies, meaning patients were put at risk.

Why isn’t there a hue and cry about this abrogation of duties to taxpayers and patients alike? Where are the police when you need them?

As for the NIH's failure to publish the required reports, it's interesting to contemplate that while it apparently can't manage to enforce its policies with researchers, it can spend time building out the likes of PubMed Central, while also putting in place unfunded mandates on publishers, who have to share their online infrastructure without further compensation. Some may call this crafty, but it seems like more of a misappropriation of effort and influence. Shouldn't the NIH be worried about directly fulfilling its duty to taxpayers, enforcing its mandates with researchers, and doing justice to the patients put at risk in the interventional studies it has funded?

But where is the outcry? The shock? The dismay? The calls for justice?

I think the reason for radio silence on these gaps in reporting is sad and simple — the most vocal critics of what happens to government-funded research have fixated on publishers for more than a decade. It’s a hard habit to break. Publishers are an easy target because they are perceived as “the other” in the scholarly and research world (despite the fact that most publishers are run by academic societies, universities, or academic researchers); because publishers have consolidated operations and well-known names (Elsevier, Wolters Kluwer, Springer); and because publishers have a relatively small and coordinated set of direct payers.

Researchers, on the other hand, are a diffuse group with diffuse funding. As a US taxpayer, it’s hard for me to be upset at anyone in particular because NIH or DOE researchers are pretty anonymous, don’t function as a corporation, and are hard to tag with individual blame. Also, how much of my taxes are being misspent because of this? Probably a small amount, but definitely an unknown amount. Again, unless this becomes a bigger deal, I can only be exasperated alone and in the abstract.

That doesn’t make the fact that researchers are apparently taking money from taxpayers and then breaking their promises to taxpayers any more acceptable. If one concern at the heart of the open access (OA) movement is that taxpayers deserve to get what they’ve paid for, then a major problem is upstream of publishers — it’s in the 78% of grant agreements that are being abrogated by researchers every year, researchers who accepted taxpayer dollars, completed the research, and then blew off their reporting requirements. Sorry, their mandatory reporting requirements.

Where are the cries of outrage at “researchers keeping data under lock and key” or “researchers breaking their fundamental bargain with the system”?

Until we know what we’re trying to do in clearer terms, we’ll continue to have anger, dismay, and moral outrage erupting predictably, but perhaps not when and where it should.

Kent Anderson

Kent Anderson is the CEO of RedLink and RedLink Network, a past-President of SSP, and the founder of the Scholarly Kitchen. He has worked as Publisher at AAAS/Science, CEO/Publisher of JBJS, Inc., a publishing executive at the Massachusetts Medical Society, Publishing Director of the New England Journal of Medicine, and Director of Medical Journals at the American Academy of Pediatrics. Opinions on social media or blogs are his own.


24 Thoughts on "The Missing Outcry — Are the NIH and Its Researchers Shirking Their Obligations?"

Thank you, Kent, for raising this question in this forum. It has been raised in other forums and met with deafening silence or the verbal analogue to a dismissive wave of the hand.

I would urge, however, that we (publishers, librarians, academics, government officials et al.) quickly confirm the accuracy of the sighting and size of the iceberg below the waterline, acknowledge this “should” problem as obviously larger than the one for which publishers have been blamed, and urgently turn our attention to PRODUCTIVE ways to address the problem of unreported data and analysis.

Productive could mean solutions that offer commercial opportunities to current and future publishers to “curate” that unreported data. It could mean that curating previously unreported data makes the value that publishers add to published data and analysis even more valuable. This could be a major opportunity for publishing, academe and government to align interests and advance the knowledge economy.

In other words, let’s figure out solutions to address this larger “should” problem and create an “is” result that benefits as many players in the supply chain (which includes the public) as possible.


I think this is a very good point (and has been pointed out before over the years but perhaps not in such a focused manner as you do in your post). Many or most scientific publishers support self-archiving and provide services to do it for authors in many cases. Yet this does not seem to be widely appreciated, or even acknowledged in some of the more insulting/emotive ranty posts one reads on the subject.

The focus of this article is on low reporting compliance of clinical trials funded by the NIH, and it gives the impression that the same level of compliance may apply to all NIH-funded research. But there is no data to support such a conclusion.

NIH funding for clinical trials is extremely difficult to secure because clinical trials are very complex, include a large number of physician investigators often at multiple sites, and because they involve human subjects. So the number of NIH-funded clinical trials is a small portion of all NIH-funded research projects. The largest portion of NIH-funded research is basic science research mostly conducted by non-physician doctoral investigators and, if the data were examined, I suggest their compliance rate for reporting (usually annual progress reports and final project reports) would be very different than is suggested here. Having worked with basic scientists, I suggest their NIH reporting compliance rate is 80% or greater, but that needs to be substantiated.

It was a small coterie of librarians who successfully pursued enactment of the NIH Public Access Policy, which requires the final peer-reviewed manuscript of NIH-funded research to be deposited into PubMed Central. This small group marshalled the outrage of many librarians because they focused on published journal articles, clearly a “library” concern. Is that same group of librarians willing, able, or even motivated to address compliance with NIH reporting requirements for clinical trials? Is this a “library” concern, or more so a taxpayer issue that needs broader attention? Maybe this is an issue where librarians can take a lead.

I would like to emphasize the fundamental point you’re making about compliance for non-medical research, which is essentially, “We hope it’s high, but we don’t know.” I don’t think that’s satisfactory.

That said, the bar is higher for clinical research, especially interventional studies that put people at risk.

You’re correct — who is going to get perturbed enough about this to drive change? I’d suggest that whoever embraces this issue, or lets it slide, will reveal their true motivations through that action or inaction.

As an active researcher, let me just point out that I cannot apply for new funding from my national science foundation unless I have filed all required reports on my previous grants. My U.S. colleagues in basic science tell me the same holds true for NIH and NSF. So with all due respect, this post seems to be based on a misconception.

How is this enforced Mike? There is no check box on the standard application form certifying that all previously required reporting has been done. There is no central tracking of compliance or any other compliance mechanism that I know of. My observation is that program officers pay relatively little attention to final reports and the informal PO network is the only compliance mechanism.

However, it is important to note that comprehensive compliance monitoring would be difficult and expensive. This appears to be just another case of writing nice-sounding rules without funding enforcement. Back when I taught regulatory design, I stressed that rules on paper were not the regulator’s product; the product was behavior. The Government is full of unenforced rules which merely create hassles and the potential for selective enforcement, which means going after people you don’t like.

If an agency does not want to, or can’t, pay for enforcing a rule then the rule should not be there.

I don’t know how this is done in the NIH. In my country, the last payment on an active grant is held by the funding agency until final reports are filed AND APPROVED by the program officer responsible for the grant. The same applies for European Union grants, with the added complication that those are large collaborative grants, so your final payment may be held hostage until submission of an overdue report from a partner in another country! This creates a strong incentive for the researchers and institutions to provide reports in a timely manner. Moreover, as noted above, if I subsequently apply for a new grant from my national funding agency and it is approved, funding is not transferred until final reports have been approved on the previous grant.
What the funding agency does with the reports afterwards is another issue altogether – as far as I know they are filed away never to see the light of day again… .

That certainly works, but it is elaborate and expensive. Over here research admin money is very tight, in favor of research money. I don’t know how NIH does it, or even if all the Institutes use a common system. But I have seen the opposite extreme, which is a system where the grant money is simply deposited in a Treasury account up front, to be drawn down by the grantee as used, on the honor system. There is no linkage to the report system whatever.

You must be joking. The administrative costs of our national science foundation are a minimal fraction of their research funding, and the overheads allowed to the institutions (which cover administration and infrastructure costs) are limited to 15-20%, depending on the type of grant. This is very low compared to the overheads NIH pays some American institutions. If the mighty USA needs lessons in administrative efficiency from my tiny country, something is very wrong…

Funding agency admin budgets have nothing to do with grant overhead rates. The system I describe is much cheaper than yours, but efficiency is another matter, unmeasurable I think.

The other issue that has not received much attention is why peer review is considered so important for the value of this research for the general public. Librarians have indeed used the rhetoric of not serving the taxpaying public in defending the NIH policy, but that argument depends upon an assumption that peer review is crucial for this purpose. But is it? Peer review exists because academe requires a system of vetting to satisfy its needs for assessment of the quality of research conducted by faculty. The primary question that peer review answers is how much of a contribution an article makes to the advancement of research in the field. But is that a question the public even wants to have answered?

What the public needs is some assurance that the research has been carried out responsibly and the results it reports are accurate and reliable. This is the kind of “light peer review” assessment that PLoS One is now providing. Maybe that kind of review could be made part of the process for validating any final report of government-sponsored research that is posted to the Net: a government “good housekeeping” seal of approval.

I find it ironic that some of the same people who are lambasting publishers about RWA and supporting FRPAA are complaining elsewhere about how inadequate peer review is, and how we should all be moving in the direction of post-publication crowd peer review. It seems to me that if you accept the soundness of the latter argument, you should abandon arguing for the NIH approach as the best solution to the problem of access to government-funded research.

Please allow me to be more blunt. Peer review exists to try to filter out crap from the system. If anybody doesn’t understand the kind of damage that can arise from publication of poorly conducted research, just look up the vaccines and autism saga. Peer review is not perfect in blocking this kind of damage, but it is the best mechanism we have to date, and it is critical to have a fair and objective peer review process in place BEFORE publication. That is probably the principal problem with mandating open access to non-peer-reviewed grant reports, and also the reason why some funded research never gets published. Also, please drop the fallacy that PLoS One peer review is not a burden for reviewers and authors; there is nothing “light” about verifying that results are “accurate and reliable.” Requiring such peer review for all research reports before they are ready for full-fledged publication would bring the system to a grinding halt.

My argument is that peer review is used to rank results via the tiered journal system. Such a ranking is needed by the community for internal purposes, but it is not needed to provide OA so simply making the unreviewed reports available meets the OA need. Then too, the proposals are peer reviewed, so my principle is that if it is worth funding then the results are worth sharing. In fact if silly stuff or bad work is being funded we need to know that too.

As an acquisitions editor for nearly 45 years and former director of a university press that published a dozen journals, I’m quite familiar with peer review. My point was that the kind of peer review used by academe for its purposes is not needed for most of the purposes members of the general public have in accessing government-funded research. As David points out, there already is peer review up front that filters out research projects unworthy of support, and crowd peer review can probably take care of identifying major flaws in research once posted. That kind of crowd peer review could include, for example, a person asking her doctor if an article about a new method of treatment for a malady from which she suffers is accurate and reliable.

I beg to differ. Crowd post-publication “peer review” is almost as dangerous and useless as mob justice. Once the media pick up on a sensational claim (e.g. “vaccinations cause autism!!!”) and some unscrupulous individuals spin such stories to their own agendas, the genie is out of the bottle and no amount of reasoned argument will prevail before significant and real damage is done. If you are not familiar with the vaccines/autism pseudoscience connection, just look at the ‘wisdom of the crowds’ evident in the debates on climate change, or in the recent Elsevier boycott storm…

There are crowds and then there are other crowds, like those that do peer review of articles posted on arXiv. I wasn’t suggesting that the “wisdom of the masses” should be the touchstone. Some crowds are relevant, others are not. In the case of articles about new medical findings, the relevant crowds are people’s own physicians. A lot of the questions peer reviewers are asked to address by scholarly publishers simply have no meaning or interest for the general public. Often the central question asked is how important a contribution this is to scholarship in the field. Peer reviewers are not even required in many publishers’ guidelines for readers’ reports to point out specific errors, and they are not usually asked to evaluate the utility of the research for practical applications, which is what the general public is most interested in. My point, again, is that there is a mismatch between the needs of the public and the needs of scholarly publishers, and peer review mainly serves to satisfy the latter, not the former.

You make a fair point, though it would be fairer if you acknowledged that the main issues with underreporting and other forms of bias arise from commercially funded research. And I would argue that the push for open access is a necessary step towards full reporting — so your defence of the principles behind RWA is problematic in respect of genuinely addressing underreporting. I’m glad to see Elsevier’s retraction — but there’s a long way to go before the overall system of reporting the findings of research is improved.

The discussion has been around taxpayer-funded research, not commercially funded research. Interestingly, if you read many of the studies about commercial research carefully, you find that data presentation isn’t usually biased, but conclusions are portrayed a little too positively. A recent study (a pre-print released by Wiley) suggests that in some fields (this one was in rheumatoid arthritis), there isn’t commercial bias — in fact, there was less bias in commercial studies. So, if we’re being fair, we also have to be accurate, both by portraying the actual facts and also by staying on-topic.

As for the RWA, that had little chance of affecting access to taxpayer-funded research reports (which are not the same as published articles). After all, it wouldn’t have changed any obligations for mandatory reporting. If reporting were occurring with a high level of compliance, the NIH and other Federal agencies would have large repositories of posted research reports on their sites. RWA wouldn’t have changed the policies of ClinicalTrials.gov or the DOE or any other agency.

RWA has nothing to do with contract reporting. RWA prohibits mandatory posting of subscription journal articles.

Fair comments. This issue should get more traction, and researchers absolutely should fulfill the terms and conditions of their funding agreements!

Who Needs Open Access To What? And Why?

(1) Researchers need access to the refereed research, not a funder research report.

(2) In the online era, there is no longer any reason to restrict access to refereed research only to those researchers who (or whose institutions) can afford to subscribe to (or pay-to-view) the journal in which the refereed research was published.

(…that is all/Ye know on earth, and all ye need to know: [apologies to JK])

What is definitely true, however, is that the fault does not lie with publishers but with researchers (and their lazy fingers) — otherwise we would not need deposit mandates.

Stevan, I disagree with both your blanket assertions, but since you give no reasons neither shall I.
