A common goal in efforts to reform scholarly communications is the elimination of inefficiencies from the process. Last week, Tim Vines wrote about eLife’s new peer review efforts meant to address the “review-reject cycle”, which Tim described as “one of the big inefficiencies in the current system.” While everyone in the research community would appreciate more efficiency, it is important to recognize the value of redundancy in research and evaluation processes. Redundancy is a feature of the system, not a bug.


A monoculture makes for an unhealthy ecosystem, so the idea of passing the entire research literature through a single, centralized authority for evaluation (which journals would then use to make their accept/reject decisions) would not serve the community well. Yes, we do waste a lot of time and energy when researchers submit their manuscripts to inappropriate journals and then repeat the process down the ladder until they find the right fit. It is often difficult for researchers to fairly evaluate their own work, and the problem is further fueled by researcher evaluation systems that offer tangible incentives tied to the ranking of the journal where the paper is eventually accepted.

That said, most researchers have experienced submitting a paper to a journal and feeling that the review process was unfair, or that the chosen peer reviewers just didn’t quite “get” the value of their project. In a case like this, it’s important to have alternatives: another place to go for a review, and another authority to lead that review process.

In such a case, the author wants the second review process to be as objective as possible, looking at the manuscript with fresh eyes, free from the bias that would be introduced by sharing the initial negative (and perceived as unfair) comments from the first set of reviewers. This is one of the reasons why offers to share reviews of rejected papers between journals are so often declined. Starting from scratch does create extra work for journals and for reviewers, but it is necessary to offset problematic reviews and to help papers find the right outlets and the right audiences.

If all papers in the life sciences are submitted to bioRxiv as preprints, as has been suggested, and those articles are then publicly triaged by the editorial staff of one particular journal or publisher, then the ability to conduct an objective further review is greatly reduced. Reviewers would have to actively avoid reading the preprints and these pre-review reviews on bioRxiv if they wished to approach their assignments without bias.

This concern is similar to objections raised in the early days of megajournals: if one follows the megajournal concept to its logical end, the market would eventually consolidate into one huge megajournal that publishes everything deemed methodologically valid. An author whose paper was rejected would have nowhere else to go, and would thus be shut out of the conversation. Having many outlets gives authors recourse when they feel they have not been treated fairly.

Another area where redundancy comes into play is in efforts to drive the publication of negative results. Here we see calls for researchers to create detailed accounts of failed and incorrect approaches, to help prevent future researchers from independently repeating those same mistakes. Again, the goal is efficiency, reducing the waste of repeating dead-end experiments and projects. But here also there is value in redundancy. When an experimental approach fails to yield results, it is impossible to know whether the approach or methodology itself is flawed, or whether the failure was due to the researcher’s inability to execute the methodology adequately.

Let’s stipulate a potential cure for cancer: a great idea, but at the research bench, the scientist accidentally made their cell culture buffers using sodium hydroxide instead of sodium chloride. Maybe their favorite song shuffled up on their headphones while they were mixing reagents, and the distraction caused them to grab the wrong bottle off the shelf. The resulting change in conditions for the cells caused the treatment to fail. The resulting negative report would be identical to a report on the same process where the reagents were made correctly and the treatment itself, rather than the implementation, was flawed. If this is part of the literature, and no one else ever tries to repeat the approach, then we’ve lost a potentially major health breakthrough.

The perceived “reproducibility crisis” is driving calls to perform confirmatory experiments that repeat important published results. If we’re going to approach positive results skeptically, why would we be any more willing to accept negative results without confirmation? If one has to be repeated, then shouldn’t the other be as well, and doesn’t that do away with any gains in efficiency?

Redundancy also occurs when multiple groups work on resolving the same research questions. Though seemingly inefficient, a multitude of varied approaches increases the likelihood of success. Aside from the rampant sexism and ethically challenged behavior found within, one of the most surprising aspects of Watson’s The Double Helix is that he and Crick were initially told not to work on the structure of DNA because another research group was already working on that project and it wouldn’t be fair to them. To the modern scientist, this seems unfathomable. Many efforts to approach a problem will fail, but this doesn’t mean they are without value, or that they’re a redundancy that should be eliminated from research.

We should similarly recognize the value of redundancy elsewhere in the system. One of the most positive aspects of eLife’s peer review proposal is that it would greatly increase the amount of redundancy in the peer review process. It calls for constant evaluation and re-evaluation of papers, an enormous expansion of the time and effort that goes into peer review. While I question the pragmatic feasibility of this approach, the recognition that peer review produces subjective opinions, not objective evaluations, and that those opinions depend on the individual reviewer and will vary over time, shows the value of building redundancy into the peer review process. Getting things right is more important than getting things done efficiently.

David Crotty

David Crotty is a Senior Consultant at Clarke & Esposito, a boutique management consulting firm focused on strategic issues related to professional and academic publishing and information services. Previously, David was the Editorial Director, Journals Policy for Oxford University Press. He oversaw journal policy across OUP’s journals program, drove technological innovation, and served as an information officer. David acquired and managed a suite of research society-owned journals with OUP, and before that was the Executive Editor for Cold Spring Harbor Laboratory Press, where he created and edited new science books and journals, along with serving as a journal Editor-in-Chief. He has served on the Board of Directors for the STM Association, the Society for Scholarly Publishing, and CHOR, Inc., as well as the AAP-PSP Executive Council. David received his PhD in Genetics from Columbia University and did developmental neuroscience research at Caltech before moving from the bench to publishing.

Discussion

4 Thoughts on "The Value of Redundancy in Research, or, In Research, Redundancy Has Value"

To me, eLife is built on contradictions: it strove to be a selective journal to compete with the likes of Nature, Science, and Cell, and built a bespoke publishing system run by elite scientists and headed by a Nobel Laureate. At the same time, it espouses an anti-elite, populist view of science, where all research, so long as it is deemed ‘sound’, is worthy of publication. Is this a purposeful model or a view into a chaotic administration trying to find its way?

Redundancy. I could get by with one kidney, or one eye, or one ear, or one arm; each is just one of the many redundancies found in the make-up of the human body! My, what an inefficient organism. Yet the redundancies resulting from evolution, much like the redundancies that evolved in scientific communication, seem to make for a robust system.

Good point, David.

You point out the contradiction between calls for preventing researchers from going down blind alleys by publishing negative results and the drive to confirm positive results.

Surely that’s not an argument against publishing negative results, is it? If we say that both negative and positive results ought to be confirmed, wouldn’t it be best to publish both types?

Yes, I agree that it is important to publish negative results (with each journal holding those studies to the same standards of rigor and significance as it does positive results). I’m just pointing out that both types of publication are equally open to challenge.
