
This Tuesday, PLoS announced the launch of “The Reproducibility Initiative,” and did so with an admission of something not quite approaching success:

When PLOS ONE launched in 2006, a key objective was to publish those findings that historically did not make it into print: the negative results, the replication studies, the reanalyses of existing datasets. Although everyone knew these studies had value, journals would rarely publish them because they were not seen to be sufficiently important. PLOS ONE sought to become a venue for exactly these types of studies. As it happened, however, the submissions were not hugely forthcoming, although we have published a few.

We’ll get back to the important part of this admission, which has to do with incentives and their relationship to value, a little later.

As announced, the Reproducibility Initiative has a number of components, many of which seem like yet more ways to tap research funds rather than ways to solve a legitimate problem for scientists.

The pitch is that scientists can check whether their experiments are reproducible using an “objective” lab and experts from various institutions, all sworn to secrecy. To do this, participants are paired up with the Science Exchange, which calls itself “The Scientific Services Marketplace.” It seems to work like a contract research organization (CRO), charging scientists to perform certain assays and tests. Prices are listed prominently on the site (an RNA Microassay is $107.50 per sample, for instance), and can be shopped by provider. If you pass the reproducibility test, you get a certificate. The sample on the site made me laugh — it’s a certificate praising Watson and Crick for having validated their methods and results.

The Science Exchange is funded by a group of venture capitalists, another bit of Silicon Valley money injecting itself into the scientific publishing world.

In a nifty bit of sleight of hand, the site claims it’s free with this lovely bit of logic:

No cost for providers to list services ● No cost for researchers to request services ● Easy to pay any provider via payment platform

I love free things that provide easy ways to pay.

The Reproducibility Initiative itself is run on the Science Exchange servers, and uses the Science Exchange domain (https://www.scienceexchange.com/reproducibility) once it redirects from http://www.reproducibilityinitiative.org. It’s a secure site, another sign of its commercial goals.

Essentially, the Reproducibility Initiative seems like a feeder program for Science Exchange providers, who likely provide Science Exchange with a percentage of their revenues as part of the no-cost listing agreement.

PLoS seems to be participating in order to create a feed into its publishing programs, assuring participants that any studies coming from the program will be published:

PLOS will publish all studies validated through the Reproducibility Initiative in a special open access Reproducibility Collection. This will provide you with a second independent publication for your replicated results in PLOS ONE.

“A second independent publication?” Aside from the use of the word “independent,” this sounds like redundant publication, but since novelty or positive results aren’t a pressure PLoS ONE thinks scientists should face, I guess they’re fine with it. Funders might have some issue with paying for experiments and publication twice, however.

There also seems to be a whiff of exaggeration about the level of support the initiative has, with Ars Technica boasting that the project is “supported by Nature, PLoS, and the Rockefeller University Press.” In fact, the support coming from Nature and RUP is about the weakest kind you can get in publishing: “linking to the PLOS ONE Reproducibility Collection for studies that were originally published in their member publications.”

The Reproducibility Initiative is accepting 40-50 studies to start with, and part of the application process is to divulge the available budget (a required field on the form). Studies “will be selected on the basis of potential clinical impact and the scope of the experiments required.”

Exactly who is screening these submissions isn’t clear. The criteria are loose (“Providers will test for: the reproducibility of your study’s methodology; validation of your study’s results”), and judging by the wording, it seems like a study might be shown to provider after provider until one wants to test it. Not to worry though — it’s all confidential.

There is also a stated belief that “high-profile studies” will be more likely to use this set of services. Most high-profile studies have been reproduced, if they can be; many high-profile studies, especially medical studies involving patient populations, can’t be reproduced exactly, and have to depend on statistical tests to validate their findings. There is no mention of how these studies might be handled, so I think this focus on “high-profile studies” is puffery. The Science Exchange partners seem to focus on basic bench studies and processes, not large-scale studies or highly original techniques.

John Ioannidis, who is quite skeptical about the validity of many studies and who is on the Reproducibility Initiative advisory board, published a list of the problems plaguing studies in a 2005 PLoS Medicine article:

. . . a research finding is less likely to be true when the studies conducted in a field are smaller; when effect sizes are smaller; when there is a greater number and lesser preselection of tested relationships; where there is greater flexibility in designs, definitions, outcomes, and analytical modes; when there is greater financial and other interest and prejudice; and when more teams are involved in a scientific field in chase of statistical significance.

Basically, little of what Ioannidis articulates can be addressed by the resources Science Exchange can bring to bear on any reproducibility task. In fact, by Ioannidis’ standards, this initiative might add to the problem by creating “more teams . . . in chase of statistical significance” and “greater financial and other interest and prejudice.”
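
As an aside, for readers who want the arithmetic behind Ioannidis’s list: his 2005 paper frames these factors through the post-study probability that a claimed finding is actually true, which falls as statistical power and the pre-study odds of a tested relationship shrink. The sketch below uses the simplest, bias-free form of that relationship; the parameter values are illustrative assumptions, not figures from his paper or from the Reproducibility Initiative.

```python
# Minimal sketch of the no-bias relationship from Ioannidis (2005, PLoS Medicine):
# the probability that a claimed finding is true (PPV), given statistical power
# (1 - beta), the significance threshold alpha, and the pre-study odds R that a
# tested relationship is real. Parameter values below are illustrative, not sourced.

def positive_predictive_value(power: float, prior_odds: float, alpha: float = 0.05) -> float:
    """PPV = (1 - beta) * R / (R - beta * R + alpha), with power = 1 - beta."""
    beta = 1.0 - power
    return (power * prior_odds) / (prior_odds - beta * prior_odds + alpha)

if __name__ == "__main__":
    # A well-powered test of a plausible hypothesis...
    print(round(positive_predictive_value(power=0.80, prior_odds=1.0), 2))   # ~0.94
    # ...versus a small, underpowered study in a field chasing many unlikely relationships.
    print(round(positive_predictive_value(power=0.20, prior_odds=0.05), 2))  # ~0.17
```

Smaller studies, smaller effects, and more teams chasing significance all push power and pre-study odds down, which is the territory Ioannidis is describing.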

Fundamentally, this seems an odd way to approach the reproducibility problem. One could argue that science itself takes care of the reproducibility problem naturally through the publication of subsequent findings, which either confirm (not interesting) or refute (interesting) published findings.

Paying to reproduce findings after publication in order to get another publication event seems pretty cynical, both on the part of the scientist and on the part of the publisher of either redundant or conflicting results. And the logic of the PLoS part of the process is hard to follow. Let’s say I publish a paper with a decent set of results, strong enough for me to sign my name to the conclusions I propose. Then, for whatever reason, I want to pay another anonymous scientist to reproduce the results after publication. What is my incentive to do this? I have a publication I’ve signed my name to. Either it will be confirmed or refuted. If I feel the results will only be confirmed, what incentive do I have to waste my time and my funder’s money? If I feel the results could or would be refuted, I have a deeper problem with my research claims.

To me, it’s problematic for a publisher to sell access to what it claims are high-quality peer-review and selection criteria on the one hand, then to also sell opportunities to repeat experiments and republish results on the implicit assumption that something different might emerge.

Whatever you think of the ethical dimension, this gets back, practically, to what I noted at the beginning about incentives. Apparently, the thought behind PLoS ONE was that journals weren’t willing to publish negative results, replication studies, or reanalyses of existing datasets. What PLoS should be learning from the scarcity of such submissions, despite a relatively low bar for publication, is that it wasn’t journals that were unwilling to publish these studies; it was scientists who were unwilling to waste valuable time publishing negative results, replication studies, or reanalyses of existing datasets.

There is also the problem of authorship. Let’s say a 20-author paper is published, and then the authors want to run it through the Reproducibility Initiative. And, let’s say, the findings are so compelling that they require a radically different publication event. What happens to the Reproducibility Initiative workers who discovered the problem — either methodological or technical? Are they now authors on the new paper?

The Reproducibility Initiative takes the inherent barriers to negative, replication, or reanalysis studies and adds further barriers — the cost to do more tests, the potential loss of confidentiality by sharing data with unknown scientists at other institutions, and another cost of publication. How this advances things isn’t clear at all to me.

The Reproducibility Initiative has some mitigating features, the most prominent of which is a strong Advisory Board. Of course, an advisory board’s role doesn’t necessarily mean much in the long run if it’s just window dressing. Governance is probably coming directly from Science Exchange.

I’ve been trying to put my finger on something in all the recent “modern” approaches to scholarly and scientific publishing. Right now, the closest I can come is that it’s all about “reinventing the wheel, but with more jargon and steps.” Alt-metrics measure themselves against the impact factor and celebrate correlation as validation; “publish then filter” publishing still depends on a filtering step, one the approach implicitly elevates, yet that step is accomplished more efficiently by “filter then publish”; and post-publication peer-review is thought useful because, at its best, it differentiates research a bit by quality and relevance, two things pre-publication peer-review already does in a more efficient, controlled, and disciplined manner.

So, it’s not surprising that a new initiative, called the “Reproducibility Initiative,” is another “reinvent the wheel, but with more jargon and steps” approach, with most of the steps aimed at taking more money out of research funds. In this case, what it’s proposing to reinvent is science itself. But this time, with certificates.

Kent Anderson

Kent Anderson is the CEO of RedLink and RedLink Network, a past-President of SSP, and the founder of the Scholarly Kitchen. He has worked as Publisher at AAAS/Science, CEO/Publisher of JBJS, Inc., a publishing executive at the Massachusetts Medical Society, Publishing Director of the New England Journal of Medicine, and Director of Medical Journals at the American Academy of Pediatrics. Opinions on social media or blogs are his own.

Discussion

8 Thoughts on "The Reproducibility Initiative — Solving a Problem, or Just Another Attempt to Draw on Research Funds?"

Kent, thank you for this thoughtful piece on the reproducibility initiative. I wanted to elaborate on the position of the Rockefeller University Press regarding the initiative, as this has been less than clear in some of the press about its launch. I stated explicitly in an email to Elizabeth Iorns, CEO of Science Exchange, that the Rockefeller University Press cannot endorse this initiative due to various shortcomings. The only “support” we provided was for the right of our authors to choose to have their studies independently replicated after publication, and, as you correctly indicated, we would be willing to provide a link in the original paper to any resulting publications.

Thanks Mike. It seemed to require clarification. Science Exchange is definitely throwing around a lot of brands as part of their PR approach.

Kent,
Thank you for your interest in our new initiative. I understand that you have been quite critical of PLOS in the past and do not wish to engage in old arguments about the value of open-access publishing models or the need for journals to take responsibility for publishing replications as well as negative results, all of which we obviously support. However, as the Co-Founder of Science Exchange, I would like to correct a number of specific points in your post that directly attack my company and my motivations for instigating the Reproducibility Initiative.
Science Exchange is not a CRO. We connect scientists with other scientific experts – mostly, in fact, at university core facilities – that have the skills required to perform particular types of experiments. We do not hide the fact that we charge a transaction fee to operate our business – so do most online marketplaces like Expedia or OpenTable or AirBnB; and few argue about the value they provide. There is no logical fallacy in saying that it is free to list with Science Exchange, free to use Science Exchange as a resource to evaluate providers and costs, and that we will only charge for successful transactions.
The fact is that there is a big problem in the reproducibility of scientific research and I don’t pretend that we have a complete solution to the problem. But, as a breast cancer researcher, I have had first-hand experience in wasting time and money trying to replicate high profile publications that could not be reproduced. I have talked with a number of scientific leaders (including those on our scientific advisory board) who agree with the scope of the problem. I believe our initiative can make a positive impact on this problem by making available a network of scientific and technical experts to replicate potentially important research and by openly acknowledging studies that can be independently replicated. And what makes this initiative possible is the existence of Science Exchange – I do not hide that fact either – in fact, I am proud of it.
Studies submitted to the Reproducibility Initiative will be selected by our advisory board, which you can see listed here: http://www.reproducibilityinitiative.org. I am sure they would not respond kindly to the idea that they are “window dressing”.
The selection criteria are no less clear than on most journals’ guides to authors – maximizing potential impact and within the scope of our services – we will match experiment types to those core facilities with the right expertise in a blinded fashion. You are quite correct that we cannot reproduce “highly original techniques” – we will focus on studies for which the experimental techniques are well established but where the potential impact is nonetheless high. And yes, as professionals, the scientific service providers on our site have agreed to maintain confidentiality of the studies they replicate. It is my understanding that when a journal “shops around” a study to reviewers, the guarantees of confidentiality are much less secure.
Finally, and to address Mike Rossner’s comment as well, we cannot be held responsible for what Ars Technica writes (unfortunately, Jon Timmer did not contact us for information before writing his article), but the wording of our press release was vetted by all publishers involved, including Dr. Rossner, who suggested modifications to the language which we readily adopted. In all our communications with interested reporters and bloggers, we have honored the spirit and the letter of our agreements with publishers who have engaged with us. We regret that you chose not to ask us directly about any of the “issues” you have raised in this article before expounding upon them.
Yours sincerely,
Elizabeth Iorns

Nobody is claiming reproducibility isn’t a problem. But this seems to me to be an inadequate solution designed to make money from researchers by dipping twice into both the study and publication budgets. As noted in the post, many of the problems with reproducibility are deeper than anything this solution can even hope to address.

As for attacks on your business, these were questions and speculation, all of which are fair game when there isn’t a lot of clarity. A phrase like “it seems to work like a contract research organization (CRO)” isn’t calling it a CRO but drawing an analogy; and “an advisory board’s role doesn’t necessarily mean much in the long run if it’s just window dressing” is something I’m sure you’d agree with.

As for the media coverage, you can’t be held accountable for Ars Technica, but the prominent display of the Nature and RUP logos on your site makes their involvement look more like an endorsement than at least RUP intended. You also claim on your home page that this has the support of “top academic journals,” clearly a reference to Nature and RUP. I think your toe is on the line on this one.

This initiative sounds very promising, and any attention drawn to the reproducibility problem is good attention. However, if reproducibility is so important to PLoS ONE, it’s curious that they haven’t introduced a data archiving policy requiring all authors to deposit their data in a public archive or as supplementary material at acceptance. Such a policy would allow the research community to evaluate the reproducibility of all 17,000 papers they publish this year, not just the 50 or so that end up in this initiative.
