In science, data is good. So more data should be better?
Not according to the Society for Neuroscience, which has decided to stop accepting and hosting supplemental data with journal articles.
Announcing the decision in the August 11th issue of the Journal of Neuroscience, Editor-in-Chief John Maunsell explains that the rationale was not space or cost, but scientific integrity: supplemental materials have begun to undermine the peer review process.
Validating supplementary data adds to the already overburdened job of the reviewer, Maunsell writes. Consequently, these materials do not receive the same degree of rigorous review — if they receive any at all. Yet the journal certifies that they have been peer-reviewed.
Since 2003, when the Journal of Neuroscience began accepting supplemental data, the average size of these files has grown exponentially and is rapidly approaching the size of the articles themselves.
With few restrictions on space, reviewers may place additional demands on authors, requiring them to perform new analyses and experiments and add them to the supplemental data. These additions are "invariably subordinate or tangential," Maunsell maintains, yet they represent significant work for the author and thus delay publication. Supplemental data thereby changes the expectations of both author and reviewer, leading to what he describes as an "arms race":
Reviewer demands in turn have encouraged authors to respond in a supplemental material arms race. Many authors feel that reviewers have become so demanding they cannot afford to pass up the opportunity to insert any supplemental material that might help immunize them against reviewers’ concerns.
In the August 11th issue of the Journal of Neuroscience, 19 of the 28 original articles contained supplemental materials, suggesting that they have become normal parts of the publication process.
Yet having two places to report methods, analyses, and results compromises the article as a "self-contained research report," Maunsell argues. Instead of a clear scientific narrative (this is the problem, this is how we approached it, this is what we found and why it matters), the article becomes a kitchen sink: a collection of related but often unimportant details attached to what should be a clear story.
As Maunsell explains, there is no viable alternative to ending the practice entirely. Limits on the number of additional tables and figures would be arbitrary; stipulating that only "important" additions be included would make enforcement impractical. This doesn't mean authors cannot host their own supplementary data, with links from the article — the journal simply will not vouch for its validity or persistence. Rationalizing the decision, Maunsell writes:
We should remember that neuroscience thrived for generations without any online supplemental material. Appendices were rare, and used only in situations that seemed well justified, such as presentation of a long derivation.
The decision to end a seven-year practice of accepting supplemental scientific data is fascinating when viewed within the larger context of science policy, which has lately been dominated by ideals of openness and transparency.
Recent developments such as Open Notebook Science, Science Commons, Open Source, Open Access, and Open Government proceed under the notion that more access to data allows for greater efficiency and greater accountability. What this view ignores, however, is that access alone is not enough.
Readers seek filters that attest to a source's validity (the backlash over WikiLeaks's publication of thousands of unverified documents may signal that established news organizations are still valued for their ability to verify the authenticity of facts and stories). A journal's decision to cease publishing supplementary materials affirms a parallel position: the facts of science should all pass through the same filter.