Fewer than half of NIH-funded clinical trials are published within 30 months of trial completion, reports a recent study published in BMJ. A companion paper reveals that nearly four out of five trials fail to comply with mandatory public reporting of their results. Together, the two studies shed light on the degree to which publicly funded research fails to reach scientists, clinicians, and the general public.

The first study, “Publication of NIH funded trials registered in ClinicalTrials.gov: cross sectional analysis,” by Joseph Ross and others, tracks the publication fate of 635 clinical trials reported as having been completed by the end of 2008. The U.S. Food and Drug Administration Amendments Act (FDAAA) of 2007 requires that clinical trials subject to FDA regulation be registered and their results reported in ClinicalTrials.gov. The International Committee of Medical Journal Editors (ICMJE) also requires trial registration as a prerequisite for publication in one of its member journals.

Ross reports that fewer than half (46%) of the completed trials were published in a peer-reviewed journal indexed by Medline within 30 months of trial completion. After a median follow-up of 51 months (four and a quarter years) after trial completion, 68% of the trials had been published in a journal indexed by Medline, while 32% remained unpublished. For those trials that were published, the median time to publication was nearly two years (23 months) after trial completion.

In addition, trials completed in 2007 or 2008 were more likely to be published within 30 months than those completed before 2007 (54% vs. 36%, respectively), suggesting that timely reporting of results in a peer-reviewed journal is getting better, not worse. The authors, however, are not entirely sanguine about their findings:

[S]ubstantial amounts of publicly funded research data are not published and available to inform future research and practice.

While the 2008 NIH Public Access Policy requires that final, peer-reviewed manuscripts be made publicly available in PubMed Central no later than one year after publication, this study illustrates that many publicly funded trials simply go unpublished.

The FDAAA requires that most non-Phase I trials of FDA-regulated drugs and medical devices report summary results on ClinicalTrials.gov within 12 months of trial completion, regardless of publication status. Ross argues that ClinicalTrials.gov could therefore serve as the public platform for providing timely access to study results.

Only it doesn’t.

In a companion article, “Compliance with mandatory reporting of clinical trial results on ClinicalTrials.gov: cross sectional study,” Andrew Prayle and others report that of the 738 clinical trials classified in ClinicalTrials.gov as requiring mandatory reporting of results within 12 months of trial completion, just 22% had done so. By comparison, 10% of trials not covered by the FDAAA had reported results. Among trials subject to mandatory reporting, industry-funded trials were more likely to report results than government-funded trials.
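As a rough illustration of the kind of registry lookup behind these compliance figures, the sketch below asks ClinicalTrials.gov whether summary results have been posted for a given registration number. It uses the registry's current v2 REST API, which post-dates these studies and is not what Prayle's team used; the endpoint and the hasResults field are assumptions based on my reading of that API, not anything described in the paper.

```python
import json
import urllib.request

# ClinicalTrials.gov v2 REST API (assumed; this API post-dates the 2012 studies).
API = "https://clinicaltrials.gov/api/v2/studies"

def results_posted(nct_id: str) -> bool:
    """Return True if summary results have been posted for the given NCT number."""
    with urllib.request.urlopen(f"{API}/{nct_id}") as resp:
        study = json.load(resp)
    # 'hasResults' is a top-level boolean in the v2 study record (assumed field name).
    return bool(study.get("hasResults", False))

if __name__ == "__main__":
    # Hypothetical example; substitute any registered trial of interest.
    print(results_posted("NCT00000102"))
```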

Prayle argues that unless the reporting rate increases, federal legislation will not achieve its goal of improving the accessibility of clinical trial results. The real problem preventing clinical trial data from reaching fellow scientists, clinicians, and the general public, Prayle writes, is the unwillingness of the authors and sponsors of these studies to report their results as required by law:

ClinicalTrials.gov allows dissemination of summary results independent of a publisher. Our study supports the suggestion that study investigators and sponsors act as the principal sources of reporting bias; reporting of results to ClinicalTrials.gov is independent of peer review, manuscript preparation, and editorial priorities.

While scientific journals, and those who run them, have become the focus of everything that is wrong with scientific communication today, the reluctance of authors and their sponsors to follow established guidelines, and the government’s inability to enforce its own laws, should be brought into the discussion of how to improve the system.

Phil Davis


Phil Davis is a publishing consultant specializing in the statistical analysis of citation, readership, publication and survey data. He has a Ph.D. in science communication from Cornell University (2010), extensive experience as a science librarian (1995-2006) and was trained as a life scientist. https://phil-davis.com/

Discussion

13 Thoughts on "Most NIH-Sponsored Trials Slow to Publish, Many Aren't Published, Most Fail to Report Data, Studies Show"

Just curious, have you compared these two studies to the 2009 Ross article in PLoS Medicine? At first glance I think trial reporting has slightly improved, but I haven’t read the three articles carefully enough to be certain.
Ross JS, Mulvey GK, Hines EM, Nissen SE, Krumholz HM (2009). Trial Publication after Registration in ClinicalTrials.Gov: A Cross-Sectional Analysis. PLoS Med 6(9): e1000144. http://dx.doi.org/10.1371/journal.pmed.1000144

The life sciences are not my field and so my impression may be wrong or out of date. Nonetheless, I had been told by a medical journal editor that journals were not interested in publishing negative results, partly because of space limitations. This could be an explanation for the low percentage of published articles vs. funded projects; however, it certainly does not explain the low reporting levels mentioned in these articles. Nor does it explain why those clamoring for free access to the published literature are not also clamoring for improved access to the data that were collected with the public’s money. Thanks, Phil, for suggesting that the discussion be broadened.

Many of us who advocate for broader literature access are also advocating for broader data availability. Unfortunately (or fortunately) for now that’s taken a back seat in the on-line conversation to topics like RWA and FRPAA.

And data availability is becoming more important all the time.

Geoffrey Boulton, a Fellow of the Royal Society who is currently leading the Society’s project, Science as a Public Enterprise, argues that since no journal can spare the space to publish the avalanche of data points that large-scale scientific experiments produce, the published paper has become more of an “advertisement” and the “science sits in the underlying data”.

Therefore the data is increasingly necessary to understand the science.

Quoted in An open and shut case? Debating the purposes of open science, a Royal Society PolicyLab meeting (mp3 file).

Hi Phil,

Did they have any data on the proportion of grant recipients that filed final reports with NIH? That must be a much higher proportion, as I think not sending a final report normally disqualifies you from applying for more money. If these reports are available to the public then the results are ‘out there’ in some form…

Like many agencies (but not all), NIH does not make its final contract reports publicly available. Doing so would help immensely.

It’s frustrating that more ‘enforcement’ is not done by funding bodies – either by chasing up non-compliers or not giving them grants next time because they did not do what they said they’d do. Australia, where I am from, is quite lax in this area – researchers are meant to deposit data from publicly funded research into repositories within twelve months, or explain why they can’t or won’t. But nothing is done to non-compliers, who can just line up to get another grant. Until funding bodies start to sanction researchers who don’t comply, what will change? Researchers are incentivised to create peer-reviewed publications, not to manage or share data. And in many disciplines, there is nowhere for the data to go.

But given that a lot of research these days actually resides in the data, because the publication cannot encompass all the points the researcher wants to make and all the results they came up with, hanging on to data is even more heinous. The research risks being duplicated, and people may go down blind alleys … Maybe there needs to be a repository for this clinical trials data to go into?

How much of this reluctance to release data is related to intellectual property? Under Bayh-Dole, the researcher and their institution own any IP that results from a federally funded project. If one intends to patent the results of research, then being forced to give away that information is going to harm one’s business plan. The legal standing here is, as I understand it, questionable. Can the government compel a researcher to give up IP that he owns? It seems as if there are contradictory requirements here, working at odds with one another.

I think there are plenty of intrinsic and extrinsic factors around sharing data. The stick of policy vs. the carrot of remuneration? My guess is carrots win, and the evidence seems to verify that. Scientists are pretty smart — they know what matters for career advancement and financial security. Spilling the beans out of altruism probably takes a back seat to those things.

Phil, for the past four years I have been tracking the progress of Asian research by periodically reviewing the PubMed database; my data go back to 1997. In the course of these reviews I have noticed that there is, at times, as much as a 20-month lag between when an article is published and when it is indexed in PubMed. Every time I update my data I have to re-query every single year, as prior-year totals can and do change. For example, as of a week ago, the total number of articles in PubMed for 2011 was 976,000; when I queried the data for 2011 at the end of December, the total was only 925,000. The number of articles for 2010 changed by approximately 4,000 between those two queries (December 2011 and February 2012). I wonder whether this lag may have skewed the data a bit. The limitations the authors used would have favored articles published in more prominent journals, so the risk of this lag would be lower (I have noticed, over the years, that articles published in the “core” journals are indexed very quickly). Though I have no doubt that the authors’ findings are valid, I wonder whether this reporting lag may have had some impact on the results.
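For what it's worth, year-by-year totals of the kind described above can be reproduced against the NCBI E-utilities API. The following is a minimal sketch, not the commenter's actual workflow; it assumes only the public esearch endpoint and the [dp] (date of publication) field tag, and simply counts PubMed records per publication year. Rerunning it a few months apart makes the indexing lag visible as the prior-year counts grow.

```python
import json
import urllib.parse
import urllib.request

# NCBI E-utilities esearch endpoint (public; no API key needed for light use)
ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_count(term: str) -> int:
    """Return the number of PubMed records matching a query term."""
    params = urllib.parse.urlencode({
        "db": "pubmed",
        "term": term,
        "retmode": "json",
        "retmax": 0,  # only the count is needed, not the record IDs
    })
    with urllib.request.urlopen(f"{ESEARCH}?{params}") as resp:
        data = json.load(resp)
    return int(data["esearchresult"]["count"])

if __name__ == "__main__":
    # Totals per publication year; repeat months later to see prior-year totals change.
    for year in range(2008, 2012):
        print(year, pubmed_count(f"{year}[dp]"))
```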

Also, in response to Judy’s comments, I have spoken to many editors over the years. All the editors I have spoken with have expressed a concern about negative findings not being submitted for publication. In fact, the point of both the ICMJE and CONSORT requirements for registration of trials was to try to prevent researchers from hiding negative data. That is not to say that editors don’t prefer positive to negative results – but certainly editors are aware of the positive bias and have implemented mechanisms to ensure that negative results are reported.
