The notion of “filter failure” has special meaning to medical publishers. If fraudulent studies or bad concepts get into the medical literature, they can linger for years and patients can be harmed. But medical journals are secondary filters — grant-making bodies, institutional review boards, and funding entities largely set the agenda in medical research.

One of the most suspect upstream filters is industry, which by definition supports research that advances commercial ends. Most notoriously, industry has also been guilty of suppressing research results that would work against its profit motives.

In the US and Europe, trial registries have sprung up in an attempt to blunt some suspect practices, namely suppressing studies that don’t pan out. Journals help enforce the registries by refusing to accept studies not registered before they commenced. But enforcement is spotty, and one result has been that industry is increasingly conducting trials in countries without registry requirements (e.g., eastern Europe, some Asian countries). In addition, registries have proliferated, and the confusion, lack of centralization, and inadequate enforcement powers have made them less effective than they might have been.

Recently, a meta-analysis published in the BMJ showed once again that upstream filtering by commercial entities can have severely deleterious effects on patients. The researchers found that 74% of the data from clinical trials had been suppressed leading up to the approval of reboxetine for the acute treatment of severe depression. When these missing patient data were added back in, the beneficial effects reported for reboxetine vanished while a host of risks emerged. Essentially, industry pulled a fast one.

The authors hit on the main lesson from this in one succinct sentence, a point that apparently all the editorialists missed:

Our findings underline the urgent need for mandatory publication of trial data.

It again calls into question whether the scientific research publication system has the right controls, levels of filtering, and information tiers to work in the modern world.

There seems to be a new genre of medical publishing emerging — one we haven’t defined except in traditional terms, but one that needs to exist much like arXiv exists in other scientific domains.

The arXiv preprint server acts as an outlet for researchers who wish to have their results subjected to some publishing pressure before possibly going to the literature. It also allows some researchers to circumvent traditional publishing altogether, dispensing with the time-consuming processes around traditional peer review so they can get back to their benches. With Google and other services acting as discoverability aids for arXiv content, some studies find incredible traction in the mainstream media and within their disciplines.

People publishing in arXiv know that it’s not a journal. People accessing arXiv know that it’s not a journal. The arXiv has neatly created a clearly delineated and useful triage area, to borrow a medical concept, in which research can exist.

Medical publishers have not successfully made a similar zone for medical research findings, despite the clear need. Every medical publishing initiative I’m aware of resolves into a journal of some sort. The BMJ tried a pre-print archive years ago, but the timing might have been wrong. It existed before clinical trial registries and strong awareness of how much medical research is suppressed.

Editorialists writing in the BMJ about this alarming meta-analysis show how tied they are to the journal model. In one case, Robert Steinbrook and Jerome Kassirer propose that journals define what constitutes full access to all the trial data and require that investigators and journal editors have it. As they write:

. . . it is time for journals to tighten their standards further. . . . Trust in the medical literature, not just in industry sponsored trials, is at stake.

The BMJ itself misses the opportunity to move from the journal-based model, despite an editorial that clearly invites the shift. In the editorial, Fiona Godlee and Elizabeth Loder make their take-home point a call for submissions to an upcoming “theme issue” (scheduled for late 2011) on the topic of the integrity of the research base. Talk about being slaves to a format.

Regulators from Germany who were involved in finding and assessing the missing data make many recommendations, including:

Legal obligation for manufacturers to provide all requested data to health technology assessment bodies without commercial restrictions to publication.

Publishers and others can help by creating a venue for publication of data, much like arXiv, in which the material is clearly preliminary, the goal is transparency and ease of publication, and the result is that there is no longer any excuse to hide clinical trial data.

What attributes would differentiate the database from the journals?

  • No impact factor
  • No editor or editorial board
  • A careful, unique branding approach
  • No agenda other than completeness
  • A standards-based, non-proprietary technical architecture (sketched below)
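
To make that last point a bit more concrete, here is a purely illustrative sketch. The `TrialResultRecord` and `ArmResult` names, and every field in them, are my own invention rather than any existing standard, but they suggest how little structure a useful deposit actually requires: a registration identity plus raw counts, with no narrative and no spin.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ArmResult:
    """Raw outcome counts for one arm of a trial; numbers only, no interpretation."""
    arm_name: str                # e.g., "active drug" or "placebo" (hypothetical labels)
    participants: int            # number randomized to this arm
    responders: int              # number meeting the prespecified response criterion
    withdrawals_adverse: int     # withdrawals attributed to adverse events

@dataclass
class TrialResultRecord:
    """One deposited trial: registration identity plus unfiltered results."""
    registry_id: str             # the identifier under which the trial was registered
    sponsor: str
    intervention: str
    primary_outcome: str         # the prespecified primary outcome, verbatim
    arms: List[ArmResult] = field(default_factory=list)
    published_doi: Optional[str] = None  # stays empty unless a journal article ever appears
```

The point is not these particular fields; it is that every trial, published or not, becomes a row anyone else can count.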

By maintaining a journals-centric publishing model, we are leaving in place a huge swath of shady terrain that companies can exploit to craft a commercially favorable but scientifically flawed publication process. Our views into this tangled forest are currently fleeting at best.

Instead of working from a presumption that all we need is “better journals,” medical editors should consider banding together with a new goal in mind — creating a type of data publication that is required prior to journal publication, yet falls short of a validated journal article. There is no better process for getting information into decent shape than to make it publishable. There is no better way to ensure data availability than to require data publication. Even a database of raw results would be superior to the opaque, incomplete, and fragmented set of disclosures and registrations we have now.

Industry would likely bring complaints about competitive exposure. After all, if I have to reveal not only which trials I’m working on but also the data being generated, my competitors could outflank me. But with the current alternative being long-term public “beta test” trials while drug makers reap billions, this seems an indefensible position. The opacity around industry data is being misused in too many cases.

By creating a home for medical research data — and requiring its attributed publication — journal publishers can perhaps help to correct what is clearly a highly manipulated world of results reporting, with enough mirrors and fog to confound us all. If we really want to bring commercial medical research to heel, we need a new approach. A database capturing all clinical trial data from industry and elsewhere would be a big step in the right direction, and it is entirely feasible.

Improving journals in response to this problem won’t be adequate. We need to improve data reporting. It’s time for medical journal editors to recognize that journals and databases are entirely different things, and that the path to the database should no longer pass through the journal. In fact, the path to the journal should pass through the database.

Kent Anderson

Kent Anderson is the CEO of RedLink and RedLink Network, a past-President of SSP, and the founder of the Scholarly Kitchen. He has worked as Publisher at AAAS/Science, CEO/Publisher of JBJS, Inc., a publishing executive at the Massachusetts Medical Society, Publishing Director of the New England Journal of Medicine, and Director of Medical Journals at the American Academy of Pediatrics. Opinions on social media or blogs are his own.

Discussion

10 Thoughts on "The Reboxetine Scandal — How Should We Make Medical Trial Data Available?"

If you really want to “bring commercial medical research to heel,” that is the regulators’ job, not the journals’.

It’s not an either/or. A combined approach, with something new in the middle requiring all data be deposited, would be stronger.

“74% of the data from clinical trials had been suppressed leading up to the approval of reboxetine for the acute treatment of severe depression.”

Er, not quite. It’s true that the data weren’t published, and I agree that that’s unacceptable. However, all those data were available to the regulators, who made the decision to approve reboxetine in full knowledge of those data.

So although it’s bad, it’s not quite as bad as you make it sound.

“Suppressed” is still an applicable word. The issue is how information is intrinsically branded. If information is not published in medical journal articles, given how the research reporting community is currently structured, it’s entirely likely to be viewed as less important. Keeping these data out of medical journals (suppressing them from publication) made the data appear less important or relevant, and gave a much more favorable dataset (26% of the patient data) a prime position.

However, if the information had been deposited in a database and structured for that level of public access, it would have been much easier to uncover what these researchers found. Data supplied to regulators can be sliced and diced to fit a regulatory review process — stacks of printouts and supplementary files on jump drives don’t do the trick. Data presented in rows and columns in an electronic database accessible to regulators and researchers alike prior to drug approval would probably be much harder to game.
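
As a purely hypothetical illustration (the pooled_response_rate function and the three-field rows it consumes are my own invention, not any existing registry format), re-analysis of a complete, structured deposit reduces to something anyone could run:

```python
from typing import Iterable, Tuple

# Each deposited drug-arm result is reduced here to (trial_id, responders, participants),
# a deliberately minimal, hypothetical structure standing in for "rows and columns."
TrialRow = Tuple[str, int, int]

def pooled_response_rate(rows: Iterable[TrialRow]) -> float:
    """Pool responders across every deposited trial, published or not."""
    rows = list(rows)
    responders = sum(r for _, r, _ in rows)
    participants = sum(n for _, _, n in rows)
    return responders / participants if participants else float("nan")

# The telling comparison is simply
#   pooled_response_rate(all_deposited_trials)  vs.  pooled_response_rate(published_trials_only).
# When every trial is a row, a selectively published evidence base becomes visible at a glance.
```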

“Suppressed” may or may not be an appropriate word (we’d need to know more about why the data weren’t published to be sure), but “suppressed” in the context of “leading up to the approval” is misleading. That, whether intentionally or unintentionally, suggests that the data were hidden from the regulators, which is simply not true. The regulators had access to all the trials, published and unpublished.

I’m not defending the non-publication of the data, which I agree is unacceptable. I’m just saying that you need to be careful not to give the impression that something else happened as well.

Some of what you are interested in is required now at clinicaltrials.gov. The basic results data (including outcomes) must be deposited or published within one year after the primary outcome has been measured.

The difficulty in allowing papers with no filter is in the interpretation of and conclusions from the data. We pay journal editors to filter the interpretation as well as peer-review the data.

Having a results database consisting solely of tables would help with this problem.

One other sidebar: this is not only a commercially supported trial problem. The NCI recently put out a press release about the National Lung Screening Trial. No data are anywhere to be seen or interpreted, and many millions of health care dollars are at stake.
