ClinicalTrials.gov was launched in 2000 as a largely voluntary registry, but the stakes were raised after a series of drug-trial scandals, leading to the Food and Drug Administration Amendments Act (FDAAA) of 2007, which mandated that clinical trials be registered.
With the passage of this act, clinical researchers in the US were required to register trials and deposit trial information at ClinicalTrials.gov, for all the right reasons – transparency, responsibility, and public benefit. The mandate garnered support from hundreds of journal editors who pledged to only publish research findings registered in the system. In the context of drug trial scandals, mounting retractions, and funding concerns, the goal of a more transparent source of scientific accountability was widely lauded.
From the start, there have been basic problems, including delays in trial registration, which some worry allow hypotheses to be revised once preliminary data arrive. This would undermine the database’s spirit and goals, since it was established partly to prevent research hypotheses from being modified to match post-hoc findings – something that may in fact be happening. In 2015, a paper in PLOS ONE found that of 137 randomized controlled trials published in major medical journals (Lancet, JAMA, NEJM, BMJ, and Annals), 15% had a primary outcome changed relative to the initial trial registration, while pre-specified non-primary outcomes were omitted in 39% and new non-primary outcomes were introduced in 44%. Best practice for registered trials holds that:
- The trial is registered prospectively, before patient enrollment begins
- The published report is free of unacknowledged post-hoc outcomes
- Every primary outcome specified in the registration is reported, whatever the result, and is clearly identified as a primary outcome
Worryingly, a breach of any of these standards may mean that the reported results have been manipulated, whether consciously (spin, bias, revisionism) or unconsciously (through a lack of methodological rigor), to highlight or hide select findings.
It’s not just neurology where compliance is a headache. Compliance rates have been low across the board for the database’s entire run. Reviewing a 2012 study from the BMJ here, Phil Davis wrote:
While scientific journals, and those who run them, have become the focus of everything that is wrong with scientific communication today, the reluctance of authors and their sponsors to follow established guidelines — and the government’s inability to enforce its own laws — should be brought into the discussion on how to improve the system.
It seems ClinicalTrials.gov is continuing its drift toward irrelevance and obsolescence. Even the ICMJE requirement that trials be registered in ClinicalTrials.gov before papers can be considered for publication has not headed off a decline in compliance.
The community itself seems to be turning its back on the mandate. Ask researchers how they feel about the site and about pre-registration requirements in general, and the responses sit at the low end of the emotional spectrum – mixed feelings at best, outright dismissal and resentment at worst.
So it’s no wonder compliance is falling. According to a study published last year in the New England Journal of Medicine, compliance among researchers between 2009 and 2013 was just over 13%, down from the 22% found in the earlier BMJ study Davis summarized. The more recent study found compliance highest for industry-funded trials (17.0%), more than double the rate among NIH-funded researchers (8.1%); other trials complied less than 6% of the time.
Stated another way, only 1,790 of the 13,327 clinical trials conducted between 2008 and 2013 had results reported via ClinicalTrials.gov, leaving 11,537 unreported. How many patients these trials represent isn’t clear, but given that many were Phase 2 or later, it seems safe to assume an average of at least 250 patients per trial (probably a conservative estimate). That would mean more than 2.5 million patients participated in trials whose results were never reported. That’s a lot of patients put at risk and a lot of science going undocumented.
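The arithmetic behind that estimate can be sketched in a few lines. Note that the 250-patients-per-trial figure is the assumption stated above, not a measured value:

```python
# Back-of-envelope check of the patient-exposure estimate.
# Trial counts are from the NEJM study cited in the text; the per-trial
# patient average is an assumption, not data.
total_trials = 13_327
reported_trials = 1_790
assumed_patients_per_trial = 250  # conservative assumption

unreported_trials = total_trials - reported_trials
patients_affected = unreported_trials * assumed_patients_per_trial

print(unreported_trials)   # 11537
print(patients_affected)   # 2884250 -- i.e., "more than 2.5 million"
```

Even halving the per-trial assumption still leaves well over a million participants whose trial results never reached the registry.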
The authors of the NEJM study seem barely able to contain their outrage over the situation in their written discussion:
The reporting requirements of the FDAAA reflect the ethical obligation of researchers and sponsors to respect human trial participants through fidelity to commitments made explicit in informed consent: namely, to make results of trials available to contribute to generalized knowledge.
In other words, patients agreed to participate by signing an informed consent agreement, and now researchers are shirking their part of the bargain. But the authors didn’t stop there, and we get a hint of the source of the problems in a subsequent section:
Since the enactment of the law, many companies have developed disclosure policies and have actively pursued expanded public disclosure of data. Curiously, reporting continues to lag for trials funded by the NIH and by other government or academic institutions. Pfizer has reported that the preparation of results summaries requires 4 to 60 hours, and it is possible that the NIH and other funders have been unable or unwilling to allocate adequate resources to ensure timely reporting.
An analysis of headache trials in neurology found a similar pattern of compliance: 39% of industry trials were registered, 27% of academic trials, and 0% of trials by government researchers.
Two major factors seem to be inhibiting compliance – an onerous user interface atop a persnickety set of requirements, and the absence of either an effective stick or an enticing carrot.
Speak to researchers, and their dislike for ClinicalTrials.gov is palpable. They find the site hard to use, technically unreliable, and frustrating. They know they should comply, but they have also learned there is no real downside if they don’t.
As publishers and editors, we need to be more cognizant of the burdens all our bright ideas are creating for researcher workflows in the Digital Age, as a recent post on NeuroDojo reflects:
When I was in grad school, I had to write a paper and publish it. Now, people are suggesting that I also pre-register my experiments; curate and upload all my raw data (which may be in non-standard or proprietary formats); deposit pre-prints; publish the actual paper in a peer-reviewed journal (because that’s not going away); promote it through social media; upload it into sites like Academia.edu or ResearchGate; update my publication information in databases like ORCID, ImpactStory, and institutional measures; and watch for comments on post-publication peer review sites like PubPeer and engage with them as necessary.
When depositing trial information and results in ClinicalTrials.gov was a single requirement around publication, it might have been feasible. Now, with so many more time sinks around published research, researchers have to pick and choose.
While the time required to document trial results via ClinicalTrials.gov is a major barrier to compliance, it isn’t the only one. The user interface and underlying technology pose barriers as well. The site was last redesigned in 2012, and it doesn’t take much to make it throw an error, interrupting workflows in ways that invite abandonment. It is not robust technology, and it falls well short of current standards for ease of use or self-healing infrastructure.
As usual, incentives are key. Publishers’ manuscript submission systems can be as Byzantine and difficult as any government web site, yet authors persist through to submission, even wrestling with multiple different systems in order to get published. The incentive to do so is strong, so the barrier is not an impediment. It is just a pain in the neck.
But for ClinicalTrials.gov, the incentives are lacking. On the one hand, there is no major benefit to the researcher for reporting results into the system. On the other hand, there is no strong penalty for failing to do so. As the NEJM authors write:
Penalties that have been established by the FDAAA include publication at ClinicalTrials.gov of “failure to submit” notifications and lists of sanctions that have been imposed, including civil penalties of up to $10,000 per day and loss of funding from the NIH. No enforcement has yet occurred . . .
The difference in compliance between industry researchers and those funded by the NIH or other bodies is worth contemplating. In speaking to researchers, it’s clear the issue is one of resources and motivation. Industry not only has greater incentives to comply – companies took a beating in the run-up to the registry’s establishment, and want more than ever to be seen as good citizens – but also has more resources to devote to compliance. Academic and government labs have fewer resources, more parties taking cuts of their funding as it flows into institutions, and less structured management to ensure accountability.
Overall, this seems like a real problem for clinical research, public accountability, and public access in the US and elsewhere. Many different issues come to mind:
- Does the NLM have the ability to make a system that’s easier to use, and more reliable?
- Is the NIH funding trials and managing resource allocation adequately to ensure that money is available to support this mandate?
- Where are the “public access” demands for trial registration?
- Are open data initiatives going to suffer a similar, frustrating fate?
- Why is there no enforcement after a decade?
- Why do researchers find it more convenient to drop research (no registration, no publication)?
- Where are the universities in this?
Overall, the story of ClinicalTrials.gov seems to be one of missed opportunities. What looked like a promising form of transparency and accountability is not working. The system has been badly executed from a workflow and technical standpoint, as users will attest, and it now competes in a crowded space of pre- and post-publication requirements and demands. Most importantly, the incentives to comply are missing, and the community has been taught that compliance carries neither benefits nor consequences.
Then there is a recent study showing that fewer than half of all clinical trials undertaken ever see publication. There are two lessons here: first, less than half of funded biomedical science is ever published; and second, publication remains a significantly better carrot for researchers, like it or not.
The cause of public access to research results is clearly more complicated than changing publisher practices. Researchers are not complying (or are finding it too difficult to comply) with government mandates to register and document their clinical trials, and more than half of biomedical research results are never published.
What kind of public access is that?