ClinicalTrials.gov — Too Many Studies Are Registered Late, Published Late, and Smaller Than Planned

Fishing in waders (Photo credit: Wikipedia)

For any major medical study, the stakes are high — the results can affect how patients take care of themselves and how physicians treat disease, for years if not decades. Yet all is not well in the land of medical research, judging from a recent analysis of ClinicalTrials.gov, which finds that the majority of clinical studies are too small to matter in the near-term, are published late, and, given when they are registered, could have been subtly manipulated by researchers.

Problems like these are yielding undesirable downstream effects — for instance, only 15% of clinical guidelines are based on robust evidence, and there’s ongoing difficulty replicating published results.

But let’s rewind.

In 1997, the US Congress mandated the creation of ClinicalTrials.gov, a repository for information about ongoing clinical trials. Designed to reveal studies to investigators, reviewers, and patients, it was, by the time it appeared in 2000, one of more than 200 registries in existence. Standards varied, researchers were unclear about their requirements, and finding studies was an excursion into uncertainty.

On top of it all, the pharmaceutical industry resisted registering trials in these repositories, citing competitive concerns.

In 2005, on the heels of two major scandals in the medical literature — the concealment of adverse trials and data around Paxil, an antidepressant; and Vioxx, an anti-inflammatory found to be associated with undisclosed cardiovascular events — the International Committee of Medical Journal Editors (ICMJE) created a policy requiring the registration of trials at ClinicalTrials.gov. The FDA followed up by requiring registration of “studies in human beings in which individuals are assigned by an investigator based on a protocol to receive specific interventions.” Other major registries were created, beefed up, and synced through the World Health Organization (WHO), creating a robust network of registries, with standards more closely aligned and monitored.

At the time, many felt the ICMJE policy of requiring proper registration before a paper could be published was an important step in enforcing registration. It set the bar for major studies, and because publication of a randomized controlled trial in a top-tier general medical journal was and remains a major step toward turning a scientific breakthrough into a lucrative clinical application, it was a strong incentive-based enforcement policy.

One goal of the registries was to get all clinical trials registered before enrollment began. Early registration was required to help prevent the manipulation of enrollment, which can occur when researchers notice an early, unanticipated, or unpleasant trend in patients, and consciously or unconsciously modify or shade enrollment criteria to follow an unexpected hot lead or dilute troublesome trends.

The recent analysis published in JAMA reviewed the success of ClinicalTrials.gov in two time periods — 2004-2007 and 2007-2010. The findings are rather troubling if you want your studies to meet the highest possible standards — between 2007 and 2010, only 48% of eligible trials were registered before patient enrollment had commenced. Granted, this was up from 33% in the earlier period, but it’s still the minority. The majority (52%) were registered after patient enrollment had started.

This means that most clinical trials are being registered after it’s theoretically possible that investigators have had a chance to peek at early data and tweak the enrollment criteria to generate a “better” trial.

Other deficiencies are noted, mostly around incomplete or incoherent records. The overall effect on the registry is one of unfulfilled potential, despite government mandates. In a related editorial, Dickersin and Rennie write that:

. . . despite important progress, ClinicalTrials.gov is coming up short, in part because not enough information is being required and collected, and even when investigators are asked for information, it is not necessarily provided. As a consequence, users of trial registries do not know whether the information provided through ClinicalTrials.gov is valid or up-to-date.

According to the FDA, these trials are required to be registered — yet once again we have investigators not fulfilling a government mandate. It’s a little maddening, because human subjects are putting themselves at risk: people who are at best inconvenienced in the name of science, and who in some cases suffer untoward events.

Another finding from the analysis is that many studies are smaller than originally planned and therefore underpowered — 96% had fewer than 1,000 patients, and 62% had fewer than 100 patients — leading one author of the study to note:

. . . these studies will not be able to inform patients, doctors and consumers about the choices they must make to prevent and treat disease.

That said, small studies are often justified, especially for early-phase drug evaluations, oncology trials (where biomarkers and genetic variation limit study populations), and investigations of biological mechanisms. It’s also not surprising to have a preponderance of early, exploratory studies in a database like this — after all, science is about winnowing hypotheses down. It makes sense there would be more tentative ideas at the early, wide end of the funnel.
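To see concretely why trials with fewer than 100 patients are often underpowered, consider a standard two-arm sample-size calculation. The sketch below uses the usual normal-approximation formula for comparing two proportions; the effect sizes are hypothetical, chosen only to illustrate the scale involved.

```python
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_per_arm(p1, p2, alpha=0.05, power=0.80):
    """Approximate per-arm sample size for a two-arm trial comparing
    event proportions p1 and p2 (two-sided test, normal approximation)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for alpha
    z_b = NormalDist().inv_cdf(power)          # critical value for power
    p_bar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar)) +
           z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)

# Detecting a drop in event rate from 20% to 15% with 80% power
# requires roughly 900 patients per arm — about 1,800 in total,
# far beyond the enrollment of most registered trials.
print(sample_size_per_arm(0.20, 0.15))
```

With clinically modest effect sizes like these, a 100-patient trial cannot reliably distinguish a real treatment effect from noise — which is what “underpowered” means in practice.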

In addition to requiring registration, the FDA also requires that results be reported within one year after a trial ends. Yet it’s clear this is not being enforced, as the editorialists write:

In another example of unresolved issues, the law regarding the FDAAA reporting rules, aimed at getting trial findings into the public domain even when the study is not published, is not yet being followed routinely or enforced. . . . For more than three-quarters (78%; 575/738) of interventional trials in ClinicalTrials.gov subject to mandatory reporting under the FDAAA, results were not reported in the time frame required by the legislation. Among this cohort of trials, those funded by industry were more likely than those funded by the NIH/government to report results (40% vs 8%).

It’s perhaps surprising that industry has proven more meticulous and diligent about making results available than government-funded researchers.

Here we may be getting into divergent incentives.

Once industry realized that a registry was unavoidable, publication became their new ally, something they could potentially do better than academics. For a company, competition for new research money isn’t the same, and quick publication fits with corporate goals. It’s also easier for the management of a company to direct employee priorities than it is for a distant body like the NIH or FDA to direct an academic researcher — after all, the incentives aren’t the same for academic researchers. The same carrot doesn’t exist (there is no academic credit for registering a trial, and there is no clear benefit for haste in a system with such latency in its incentives and competing demands to get the next grant), and the sticks are not being wielded vigorously by the NIH, FDA, and potentially by some top-tier journals.

It’s clear from this analysis that there’s a big issue the medical research community should be dealing with — most clinical studies aren’t being registered in a manner that best preserves the integrity of the scientific method, while also doing justice to the risks courted by patients in trials.

Until registration is required prior to patient enrollment, too many medical studies can be perceived as elaborate post hoc fishing expeditions rather than pure scientific studies.


About Kent Anderson

I am the CEO of RedLink and RedLink Network, a past-President of SSP, and the founder of the Scholarly Kitchen. I’ve worked as Publisher at AAAS/Science, CEO/Publisher of JBJS, Inc., a publishing executive at the Massachusetts Medical Society, Publishing Director of the New England Journal of Medicine, and Director of Medical Journals at the American Academy of Pediatrics. Opinions on social media or blogs are my own.


3 thoughts on “ClinicalTrials.gov — Too Many Studies Are Registered Late, Published Late, and Smaller Than Planned”

  1. Reblogged this on MLibrary Healthy Communities.

    Posted by kmacdoug | May 10, 2012, 9:47 am
  2. Kent,

    According to the ICMJE website, over 1,100 publications have agreed that they will not publish a clinical trial unless it is registered, that the registration contains a minimum set of data, and that the trial is registered before the first patient has been recruited. I believe there was a grace period when ICMJE journals did not rigorously apply the rules, allowing researchers time to change procedures so as to comply with the requirement, but that grace period has long since passed. I did a little investigative work on both the WHO trial registry site and the PubMed database. I reviewed the data for the period 2009 – 2010 for both sites. Here are my findings.

    According to the PubMed database 105,936 clinical trials were published during the time period 2009-2010. According to the WHO trials registry website, during that same time period, 71,624 trials were registered. If the JAMA author’s data is correct, that would mean that only approximately 34,800 of the trials registered during 2009 – 2010 would be eligible for publication in the ICMJE signatory journals. Though it is true that a clinical trial can produce more than one paper, in order for the current publication rate to be sustained, each registered trial would have to produce roughly three papers. Have you seen any data on the average number of papers produced by a single clinical trial? As interesting as the global data is, the regional data is even more interesting.

    For the time period of my study (2009 – 2010), I got the following counts by region:

    World: 105,936 articles published; 71,624 trials registered
    USA: 28,923 articles published; 11,250 trials registered
    Europe: 37,309 articles published; 12,711 trials registered
    Asia: 14,007 articles published; 10,769 trials registered

    It seems to me that we are not registering enough trials to sustain the current rate of publication, and in 2011 the number of clinical trials published globally actually fell by 12% — the largest decline I have observed since I began tracking the PubMed database in 1990. But even more interesting than the global total is the discrepancy for Europe and the USA. If the authors of the JAMA study are correct (only 48% of trials are registered before patient recruitment), this would mean that only 5,400 of the USA trials are eligible for publication; which means that each trial would have to elicit more than five papers. The data is even more bleak for Europe. Only 6,100 of the registered trials from Europe would be eligible for publication, meaning that each trial would have to produce more than six papers to sustain current publication rates. Is it possible that each trial will produce that many papers or are we about to see a significant decline in the publication rate of ICMJE signatory journals?
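    The back-of-envelope arithmetic in this comment can be reproduced in a few lines. This sketch uses the figures quoted above, and applies the JAMA study’s 48% pre-enrollment registration rate as the share of registered trials eligible for ICMJE publication:

```python
# Figures quoted in the comment above (2009-2010)
published = {"World": 105_936, "USA": 28_923, "Europe": 37_309, "Asia": 14_007}
registered = {"World": 71_624, "USA": 11_250, "Europe": 12_711, "Asia": 10_769}
eligible_rate = 0.48  # JAMA estimate: trials registered before enrollment

for region in published:
    eligible = registered[region] * eligible_rate
    ratio = published[region] / eligible
    print(f"{region}: {eligible:,.0f} eligible trials, "
          f"{ratio:.1f} papers per eligible trial")
```

    This gives roughly 3 papers per eligible trial globally, over 5 for the USA, and over 6 for Europe — the ratios the comment relies on.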

    Posted by Mark Danderson | May 10, 2012, 2:37 pm



The mission of the Society for Scholarly Publishing (SSP) is "[t]o advance scholarly publishing and communication, and the professional development of its members through education, collaboration, and networking." SSP established The Scholarly Kitchen blog in February 2008 to keep SSP members and interested parties aware of new developments in publishing.
The Scholarly Kitchen is a moderated and independent blog. Opinions on The Scholarly Kitchen are those of the authors. They are not necessarily those held by the Society for Scholarly Publishing nor by their respective employers.