For any major medical study, the stakes are high — the results can affect how patients take care of themselves and how physicians treat disease, for years if not decades. Yet all is not well in the land of medical research, judging from a recent analysis of ClinicalTrials.gov, which finds that the majority of clinical studies are too small to matter in the near term, are published late, and, given how late they are registered, could be subtly manipulated by researchers.
Problems like these are yielding undesirable downstream effects — for instance, only 15% of clinical guidelines are based on robust evidence, and there’s ongoing difficulty replicating published results.
But let’s rewind.
In 1997, the US Congress mandated the creation of ClinicalTrials.gov, a repository for information about ongoing clinical trials. Designed to make studies visible to investigators, reviewers, and patients, it was, by the time it launched in 2000, one of more than 200 registries in existence. Standards varied, researchers were unclear about their obligations, and finding studies was an excursion into uncertainty.
On top of it all, the pharmaceutical industry resisted registering trials in these repositories, citing competitive concerns.
In 2005, on the heels of two major scandals in the medical literature — the concealment of adverse trials and data around Paxil, an antidepressant; and Vioxx, an anti-inflammatory found to be associated with undisclosed cardiovascular events — the International Committee of Medical Journal Editors (ICMJE) created a policy requiring the registration of trials at ClinicalTrials.gov. The FDA followed up by requiring registration of “studies in human beings in which individuals are assigned by an investigator based on a protocol to receive specific interventions.” Other major registries were created, beefed up, and synced through the World Health Organization (WHO), creating a robust network of registries, with standards more closely aligned and monitored.
At the time, many felt the ICMJE policy to require proper registration before a paper could be published was an important step in enforcing registration. It set the bar for major studies, and because publication of a randomized controlled trial in a top-tier general medical journal was and remains a major step toward turning a scientific breakthrough into a lucrative clinical application, it was a strong incentive-based enforcement policy.
One goal of the registries was to get all clinical trials registered before enrollment began. Early registration was required to help prevent the manipulation of enrollment, which can occur when researchers notice an early, unanticipated, or unpleasant trend in patients, and consciously or unconsciously modify or shade enrollment criteria to follow an unexpected hot lead or dilute troublesome trends.
The recent analysis published in JAMA reviewed the success of ClinicalTrials.gov in two time periods — 2004-2007 and 2007-2010. The findings are rather troubling if you want your studies to meet the highest possible standards — between 2007 and 2010, only 48% of eligible trials were registered before patient enrollment had commenced. Granted, this was up from 33% in the earlier period, but it’s still the minority. The majority (52%) were registered after patient enrollment had started.
This means that most clinical trials are being registered after it’s theoretically possible that investigators have had a chance to peek at early data and tweak the enrollment criteria to generate a “better” trial.
Other deficiencies are noted, mostly around incomplete or incoherent records. The overall effect is one of unfulfilled potential, despite government mandates. In a related editorial, Dickersin and Rennie write:
. . . despite important progress, ClinicalTrials.gov is coming up short, in part because not enough information is being required and collected, and even when investigators are asked for information, it is not necessarily provided. As a consequence, users of trial registries do not know whether the information provided through ClinicalTrials.gov is valid or up-to-date.
According to the FDA, these trials are required to be registered — yet once again we have investigators not fulfilling a government mandate. It’s a little maddening, because human subjects are putting themselves at risk: at best they are inconvenienced in the name of science, and in some cases they suffer untoward events.
Another finding from the analysis is that many studies are smaller than originally planned and therefore underpowered — 96% had fewer than 1,000 patients, and 62% had fewer than 100 patients — leading one author of the study to note:
. . . these studies will not be able to inform patients, doctors and consumers about the choices they must make to prevent and treat disease.
That said, small studies are often justified, especially for early-phase drug evaluations, oncology trials (where biomarkers and genetic variation limit study populations), and investigations of biological mechanisms. It’s also not surprising to have a preponderance of early, exploratory studies in a database like this — after all, science is about winnowing hypotheses down. It makes sense there would be more tentative ideas at the early, wide end of the funnel.
The FDA not only requires registration, it also requires that results be published within one year of a trial’s end. Yet it’s clear this is not being enforced, as the editorialists write:
In another example of unresolved issues, the law regarding the FDAAA reporting rules, aimed at getting trial findings into the public domain even when the study is not published, is not yet being followed routinely or enforced. . . . [for] more than three-quarters (78%; 575/738) of interventional trials in ClinicalTrials.gov subject to mandatory reporting under the FDAAA, results were not reported in the time frame required by the legislation. Among this cohort of trials, those funded by industry were more likely than those funded by the NIH/government to report results (40% vs 8%).
It’s perhaps surprising that industry has proven more meticulous and diligent about making results available than government-funded researchers.
Here we may be getting into divergent incentives.
Once industry realized that a registry was unavoidable, publication became its new ally, something companies could potentially do better than academics. For a company, competition for new research money isn’t the issue it is in academia, and quick publication fits with corporate goals. It’s also easier for a company’s management to direct employee priorities than it is for a distant body like the NIH or FDA to direct an academic researcher — after all, the incentives aren’t the same for academic researchers. The carrot doesn’t exist (there is no academic credit for registering a trial, and no clear benefit to haste in a system with such latency in its rewards and such competing demands to get the next grant), and the sticks are not being wielded vigorously by the NIH, the FDA, or, potentially, some top-tier journals.
It’s clear from this analysis that there’s a big issue the medical research community should be dealing with — most clinical studies aren’t being registered in a manner that preserves the integrity of the scientific method or does justice to the risks patients court by enrolling in trials.
Until registration is required prior to patient enrollment, too many medical studies can be perceived as elaborate post hoc fishing expeditions rather than pure scientific studies.