ClinicalTrials.gov was launched as a voluntary registry in the 1990s, but the stakes were raised after a number of scandals, leading to the Food and Drug Administration Amendments Act (FDAAA), which in 2007 mandated that clinical trials be registered.

With the passage of this act, clinical researchers in the US were required to register trials and deposit trial information at ClinicalTrials.gov, for all the right reasons – transparency, responsibility, and public benefit. The mandate garnered support from hundreds of journal editors who pledged to publish only research findings registered in the system. In the context of drug trial scandals, mounting retractions, and funding concerns, the goal of a more transparent source of scientific accountability was widely lauded.

Ever since, there have been basic problems, including observed delays in the registration of trials, which some worry may allow hypotheses to be revised based on preliminary data. This would undermine the database’s spirit and goals, as it was established partially to reduce the ability to modify research hypotheses to match post-hoc findings — something that may indeed be happening. In 2015, a paper in PLOS ONE found that of 137 randomized controlled trials published in the major medical journals (the Lancet, JAMA, NEJM, BMJ, and Annals of Internal Medicine), 15% had the primary outcome changed when compared to the initial trial registration, while pre-specified non-primary outcomes were omitted in 39% and new non-primary outcomes were introduced in 44%.

A recent examination of trials in neurology found that only 5% of all trials studying headache satisfied three main conditions of registration:

  1. The trial was prospectively registered before patient enrollment began
  2. The trial report manuscript was free of all unacknowledged post-hoc outcomes
  3. All primary outcomes outlined in the trial registration were reported (whatever the nature of the findings) and clearly identified as primary outcomes

In the journal Headache, the executive editor writes:

Worryingly, any breach of these standards could potentially mean that the results reported have either consciously (spin, bias, revisionism) or unconsciously (through a lack of methodological rigorousness) been manipulated to highlight (or hide) select findings.

It’s not just neurology where compliance is a headache. Compliance rates have also been low across the entire run of the database. Reviewing a 2012 study from the BMJ here, Phil Davis wrote:

While scientific journals, and those who run them, have become the focus of everything that is wrong with scientific communication today, the reluctance of authors and their sponsors to follow established guidelines — and the government’s inability to enforce its own laws — should be brought into the discussion on how to improve the system.

It seems ClinicalTrials.gov is continuing its drift toward irrelevance and obsolescence. Even the ICMJE requirement that trials be registered in ClinicalTrials.gov before papers can be considered for publication has not headed off a decline in compliance.

The community itself seems to be turning its back on the mandate. Ask researchers how they feel about the site and about pre-registration requirements in general, and the answers fall on the left side of the bell curve of emotions – mixed feelings at best, outright dismissal and resentment at worst.

So it’s no wonder compliance is falling. According to a study published last year in the New England Journal of Medicine, between 2009 and 2013 compliance among researchers was just over 13%, down from the 22% found in the earlier BMJ study Davis summarized. The more recent study found that compliance for industry-funded studies was highest (17.0%), more than double the rate among NIH-funded researchers (8.1%). Other trials complied less than 6% of the time.

Stated another way, only 1,790 out of 13,327 clinical trials conducted between 2008 and 2013 had results reported via ClinicalTrials.gov, leaving 11,537 unreported. How many patients these trials represent isn’t clear, but given that many of these trials were Phase 2 or later, it seems safe to assume that at least 250 patients were involved in each trial on average (probably a conservative estimate). That would mean that more than 2.5 million patients participated in trials whose results went unreported. That’s a lot of patients put at risk and a lot of science going undocumented.
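The back-of-envelope arithmetic behind that estimate is easy to check — here is a minimal sketch, using the trial counts from the NEJM study cited above and the 250-patients-per-trial average assumed in this post:

```python
# Estimate of patients in trials whose results were never reported on
# ClinicalTrials.gov. Trial counts come from the NEJM study (2008-2013);
# the per-trial average is this post's stated (conservative) assumption.
total_trials = 13_327          # clinical trials conducted 2008-2013
reported = 1_790               # trials with results reported

unreported = total_trials - reported
avg_patients_per_trial = 250   # assumed average for Phase 2+ trials

patients_affected = unreported * avg_patients_per_trial
print(unreported)              # 11537 trials unreported
print(patients_affected)       # 2884250 -> "more than 2.5 million" patients
```

Even at this deliberately low per-trial average, the unreported trials represent nearly 2.9 million participants.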

The authors of the NEJM study seem barely able to contain their outrage over the situation in their written discussion:

The reporting requirements of the FDAAA reflect the ethical obligation of researchers and sponsors to respect human trial participants through fidelity to commitments made explicit in informed consent: namely, to make results of trials available to contribute to generalized knowledge.

In other words, patients agreed to participate by signing an informed consent agreement, and now researchers are shirking their part of the agreement. But the authors didn’t stop there, and we get a hint of the source of the problems in a following section:

Since the enactment of the law, many companies have developed disclosure policies and have actively pursued expanded public disclosure of data. Curiously, reporting continues to lag for trials funded by the NIH and by other government or academic institutions. Pfizer has reported that the preparation of results summaries requires 4 to 60 hours, and it is possible that the NIH and other funders have been unable or unwilling to allocate adequate resources to ensure timely reporting.

In the analysis of headache trials in the field of neurology, a similar pattern of compliance was found, with 39% of industry trials registered, 27% of academic studies registered, and 0% of studies by government researchers registered.

Two major contributors seem to be inhibiting compliance – an onerous user interface atop a persnickety set of requirements, and a lack of an effective stick or enticing carrot.

Speak to researchers, and their dislike for ClinicalTrials.gov is palpable. They find the site hard to use, technically unreliable, and frustrating. They know they should comply, but they have also learned there’s no real downside if they don’t follow through.

As publishers and editors, we need to be more cognizant of the burdens all our bright ideas in the Digital Age are creating on researcher workflows, as a recent post on NeuroDojo reflects:

When I was in grad school, I had to write a paper and publish it. Now, people are suggesting that I also pre-register my experiments; curate and upload all my raw data (which may be in non-standard or proprietary formats); deposit pre-prints; publish the actual paper in a peer-reviewed journal (because that’s not going away); promote it through social media; upload it into sites like Academia.edu or ResearchGate; update my publication information in databases like ORCID, ImpactStory, and institutional measures; and watch for comments on post-publication peer review sites like PubPeer and engage with them as necessary.

When depositing trial information and results in ClinicalTrials.gov was a single requirement around publication, it might have been feasible. Now, with so many more time sinks around published research, researchers have to pick and choose.

While the time commitment to document trial results via ClinicalTrials.gov is a major barrier to compliance, it’s not the only one. The user interface and technology pose barriers as well. The site was last redesigned in 2012, and it doesn’t take much to make it throw an error, which means interrupted workflows that invite abandonment. It’s not robust technology up to snuff with the latest models for ease of use or self-healing infrastructure.

As usual, incentives are key. Publishers’ manuscript submission systems can be as Byzantine and difficult as any government web site, yet authors persist through to submission, even wrestling with multiple different systems in order to get published. The incentive to do so is strong, so the barrier is not an impediment. It is just a pain in the neck.

But for ClinicalTrials.gov, the incentives are lacking. On the one hand, there is no major benefit to the researcher for reporting results into the system. On the other hand, there is no strong penalty for failing to do so. As the NEJM authors write:

Penalties that have been established by the FDAAA include publication at ClinicalTrials.gov of “failure to submit” notifications and lists of sanctions that have been imposed, including civil penalties of up to $10,000 per day and loss of funding to the NIH. No enforcement has yet occurred . . .

The difference in compliance between industry researchers and those funded by the NIH or other bodies is worth contemplating. In speaking to researchers, it’s clear that the issue is one of resources and motivation. Industry not only has greater incentives to comply – they took a beating in the run-up to the establishment of the registry, and want to be good citizens more than ever – but also has more resources to devote to compliance. Academic or government labs have fewer resources, more people taking cuts of their funding as it flows into the institutions, and less structured management to ensure accountability.

Overall, this seems like a real problem for clinical research, public accountability, and public access in the US and elsewhere. Many different issues come to mind:

  • Does the NLM have the ability to make a system that’s easier to use, and more reliable?
  • Is the NIH funding trials and managing resource allocation adequately to ensure that money is available to support this mandate?
  • Where are the “public access” demands for trial registration?
  • Are open data initiatives going to suffer a similar, frustrating fate?
  • Why is there no enforcement after a decade?
  • Why do researchers find it more convenient to drop research (no registration, no publication)?
  • Where are the universities in this?

Overall, the story of ClinicalTrials.gov seems one of missed opportunities. What seemed a promising form of transparency and accountability is not working. It has been badly executed from a workflow and technical standpoint, as users will attest. It now competes in a crowded space of pre- and post-publication requirements and demands. More importantly, the incentives to comply are missing, and the community has been taught that there are no benefits or consequences related to compliance.

Then we have a recent study showing that less than half of all clinical trials undertaken actually see publication. There are two lessons here: first, less than half of funded biomedical science is ever published; and, second, publication provides a significantly better carrot for researchers, like it or not.

The cause of public access to research results is clearly more complicated than changing publisher practices. Researchers are not complying (or are finding it too difficult to comply) with government mandates to register and document their clinical trials, and more than half of biomedical research results are never published.

What kind of public access is that?

Kent Anderson


Kent Anderson is the CEO of RedLink and RedLink Network, a past-President of SSP, and the founder of the Scholarly Kitchen. He has worked as Publisher at AAAS/Science, CEO/Publisher of JBJS, Inc., a publishing executive at the Massachusetts Medical Society, Publishing Director of the New England Journal of Medicine, and Director of Medical Journals at the American Academy of Pediatrics. Opinions on social media or blogs are his own.


11 Thoughts on "Why Is ClinicalTrials.gov Still Struggling?"

Great article, but please don’t perpetuate the “50% of all trials go unpublished” zombie statistic. The paper you cite looks only at trials completed between 2007 and 2010, and for which the responsible reporting party was a US academic medical center. Hardly “all clinical trials undertaken”, and 66.5% of these studies had either posted results on ClinicalTrials.gov or had published in a journal, so also nearer 2/3 than 1/2. Still not enough though, I agree.

You’re right, that line probably deserves a post all its own, as there is a lot of nuance to be had. But the problem is real. As the authors of the study write:

Our cross sectional examination of academic medical centers in the United States, including the nation’s most productive research institutions, showed poor performance for disseminating the results of completed clinical trials through publication in peer reviewed biomedical journals or reporting of results on ClinicalTrials.gov. Only 29% (1245/4347) of completed clinical trials conducted by the faculty at major academic institutions were published within two years of study completion and only 13% (547/4347) reported results on ClinicalTrials.gov.

A metric oddly left out of most of these discussions is the number of patients affected/put at risk. Because the study mentioned here involved major academic medical centers, it’s entirely plausible that more patients were put at risk in this subset of trials. Perhaps it’s a sign that we’re more focused on papers than patients.

I used to teach a course in regulation writing for US Federal regulators. My central message was that words on paper were not the product; compliance was the product. Rules on paper are cheap and easy, but behavior is difficult and expensive. It sounds like no one is paying the bill for compliance. The result is not surprising.

Noncompliance is of course a major point of pain for patients and patient advocates, who hang on every update and scour each entry trying to find the right match. From that user perspective, the lack of standard nomenclature and the variability in presentation order and detail is a further burden. Even worse is noncompliance in reporting outcomes. The patient community knows to seek out meeting reports on trials (usually interim results) from ASCO and other clinician societies. For those who have participated in trials, however, noncompliance in reporting final outcomes – even disappointing ones – is a breach of the covenant. Lives are on the line when people volunteer for a trial in more ways than one, and though we know that odds of direct benefits to participants are modest, the belief that we are advancing research is enough to tip the balance.

It seems the federal grant review process (NIH, AHRQ, etc.) for RCTs should include a review of ClinicalTrials.gov, in addition to MEDLINE, for pre-existing RCTs before deeming a proposed trial likely to provide a unique scientific contribution. Thus, relevant but unpublished trials would put a hold on new trials until the earlier data is published. Likewise, a researcher with unpublished data from federal funding would not be eligible for new funding. While admittedly federal funding only affects a portion of trials, this would be a start in the right direction.

You might be interested in another trend with ClinicalTrials.gov that is concerning. An increasing number of listings in their database are not even traditional clinical trials, but rather are for-profit efforts charging patients for participation to receive unproven stem cell therapies.
Also see my interview with Director Zarin of ClinicalTrials.gov.


If the study sponsor delays posting results on ClinicalTrials.gov until the primary publication is ready, does that block individual investigators from publishing?
