It has never been easier to post a comment to a scientific article. Just don’t expect an adequate reply from the author — or one at all — according to a new study of scientific comments left on BMJ articles.
The article, “Adequacy of authors’ replies to criticism raised in electronic letters to the editor: cohort study,” was published on August 10th in the medical journal BMJ. Its lead author, Peter C. Gøtzsche, is director of the Nordic Cochrane Centre. Tony Delamothe and Fiona Godlee, both editors of BMJ, were co-authors.
The study reports on articles that received comments and criticisms through BMJ’s Rapid Response, a feature that allows readers to post immediate responses to an article, which are then appended to the article’s online version.
Of the 350 papers that formed their study group, 105 (30%) received substantive criticism; however, fewer than half of these 105 papers (45%) received any reply from the author, the researchers report. More distressingly, papers that received severe criticism (comments that might invalidate the study entirely) were no more likely to garner a response from the author than papers that received only minor criticism. Moreover, when authors did respond to criticism, their critics were generally unsatisfied with the response. Editors, in contrast, were much more satisfied with authors’ responses.
Explaining these differences in perspective, Gøtzsche speculates that editors’ receptive attitude may reflect a defense of their decision to publish the article and a desire to protect the journal’s image. Editors may also simply be less qualified than critics to evaluate the adequacy of an author’s response.
In a companion editorial on the article, David Schriger and Douglas Altman argue that these results make a strong case for an independent letters editor.
Gøtzsche et al. provide several recommendations for journal editors, among them:
- Encouraging authors to respond to criticisms or following up when responses are inadequate;
- Sending comments and replies out for peer review when the editor lacks sufficient knowledge about the research; or
- Requiring authors to respond to any and all comments as a precondition to manuscript acceptance.
Science should involve mechanisms for dialog and self-correction, Gøtzsche argues; he recommends that journal editors consider an online response feature for their journals, if they have not done so already, and place no time restrictions on the commenting process. He writes:
Science has no “use before” date but evolves through open debate.
The benefits of implementing technology also come with costs, and there is some rationale for placing limits on letters to the editor. Many popular articles receive a substantial number of comments, many of them laudatory, perfunctory, or serving no other purpose than to give other researchers a forum for self-promotion. In these cases, it may be difficult for future readers to find important criticism. Allowing important voices to be heard may justify the silencing of others.
Still, most papers represent monologues rather than dialogues. The vast majority of scientific papers receive no comments. And while the number of scientific articles grows each year, the number of letters remains the same. In their editorial, Schriger and Altman write:
[A] mountain of poor quality unfocused literature has left its readership fatigued, numb, and passive. . . . Each new paper is another monologue added to the heap. Few read it and fewer care. Errors remain unnoticed or un-noted, and no one seems terribly bothered.
Schriger and Altman argue that inadequate post-publication review cannot be remedied simply by changing the mechanics of public feedback, because the problem lies deeper, in the embedded culture and reward systems of researchers. Ultimately, we need a change of culture that places more value on public discussion.
Until then, post-publication review may continue to be spotty and unreliable, an inadequate substitute for, though a distinct addition to, peer review.
Discussion
Sounds like these folks want to force science to go somewhere. I would be very cautious about imposing some vision or other on science. If 30% of articles get substantive comments, that is great. The authors should always be informed of these comments, but beyond that nothing should be required. Let the system find its own way.
The stuff about a mountain of poor literature and a numb readership is hyperbolic garbage. It implies an unrealistic vision of how science works.
I don’t think it’s hyperbolic garbage. You’re at the center of a research facility, from what I can tell, so you’re self-selected to be a good researcher. But practitioners in many domains are finding the amount of research being produced to be numbing and a form of overload. There’s too much for them to make sense of, which is why research synthesis tools are becoming more popular. The problem is that this inserts a barrier between research and practice, one that probably won’t be bridged.
When authors don’t reply to readers, that’s just another wedge in the system, and another bit of alienation inserted between the worlds of research and practice.
I am not at a research facility, but I do research on the logic of scientific communication for my principal client — http://www.osti.gov — one of the world’s largest Web aggregators of scientific content.
The idea that there is suddenly now too much stuff to read is precisely hyperbolic nonsense. This has been true for hundreds of years. But I agree about the value of synthesis tools, because that is what I build. What is new is the opportunity to build such tools, not the so-called “overload” which has always existed.
The basic point, however, is that thought is a zero-sum game, so if you want researchers to start spending a lot of time replying to comments, what do you propose they stop doing — research? Proposals? Sleeping?
Think of it as cognitive time-and-motion studies, or a cognitive budget problem. People who want researchers to do something new have to say where the time will come from and what will be lost, as well as demonstrate that the change is worth the loss.
Thanks for the clarification. A constant frustration I’ve heard from editorial sorts is that practitioners aren’t involved enough with the research, so there’s a lag in adoption or even appreciation of new research findings, a certain “unreality” of research itself, and so forth. If our incentives continue to revolve around publish-or-perish, we will continue to have this type of gap. Do we accept it? Can we do better? Is there a way to manage online comments so that the zero-sum game is avoided? (We learned to make letters work.)
I am all for making revolutions work, because that is what I do. But requiring that authors answer all “substantive” comments on their work is not the answer. For one thing, there is no time. For another, ignoring one’s critics is often the path to progress.
Note too that if people can’t keep up with the literature now, then adding comments is only going to make things worse. What comments are really good for remains to be seen. This is not something to be settled with new rules. The rules can come when we know what we are doing.
Using the existing model, I agree. My point is that perhaps we could change the model for scholarly online commenting to make it fit with author productivity, practitioner interest, etc. This is about how culture and technology can interact well. Right now, the technology is too crudely deployed to be very useful to the culture, and the culture is also not tuned to the opportunities the technology might present.
A glance at the comments below this week’s Nature editorial will reveal why the vast majority of scientists feel ‘commenting’ has no value: http://bit.ly/aOd6SX
As the BMJ editors note, there is already too much, often poor, literature to wade through. Adding another layer that has an even higher noise level will not help.
I tend to agree with David above about letting the system find its own way rather than trying to change the culture. In the current culture, papers either stand the test of time because subsequent work supports their conclusions – or they don’t and vanish into obscurity, remembered only as unfortunate, temporarily distracting dead ends.
The idea that radically changing the mechanics would somehow ramp up the pace of progress may be wishful thinking.
Rather than the laissez-faire approach argued for by David, what could we actively do, not to *change* the culture, but to *shift* it into a better channel? For instance, authors are expected to respond to Letters to the Editor. This is just a slow, analog form of commenting. Forcing authors to reply to online comments isn’t *changing* the culture but *shifting* established cultural expectations into the new world.
The Nature article shows how one bore can dominate a discussion. I wish social media would become more social, in the sense that, in addition to thumbs-up/thumbs-down ratings, we could have a “throw the bum out” rating, and if enough people thought the bore should leave the party, that person would be shown the door. If he were at a cocktail party or in a meeting, he’d be shushed and/or ostracized seamlessly by normal social interactions.
That Nature article is a great example of what was recently described in the New York Times:
Many professors, of course, are wary of turning peer review into an “American Idol”-like competition…and that know-nothings would predominate. After all, the development of peer review was an outgrowth of the professionalization of disciplines from mathematics to history — a way of keeping eager but uninformed amateurs out. “Knowledge is not democratic…”
“[A] mountain of poor quality unfocused literature has left its readership fatigued, numb, and passive. . . . Each new paper is another monologue added to the heap. Few read it and fewer care. Errors remain unnoticed or un-noted, and no one seems terribly bothered.”
This might be an effective argument if it were unique. But it isn’t. Go to the archives of any library, pick up a 50+ year-old journal on a subject you know, look at a full year’s worth of articles, and tell me that things were any better back then.
And yet our current high state of science was built on this same base of mostly garbage.
Most people act like history began the day they were born, even when they speak otherwise.
On a technical point, how many readers of the journals in question download the PDF and either read it on screen or print it out, versus reading the HTML version of the papers (where the comments can be found)? Can a commenting system be effective if it’s not seen by the majority of readers?
The fact is that peer review is ultimately a misnomer, and laissez-faire doesn’t apply: publication power is firmly in the hands of editors, not “peers”. The idea of an independent letters editor, in charge of identifying substantial criticisms even if submitted through a website, should be taken very seriously.
It’s true that bad papers are eventually forgotten, but when there are public policy consequences, the process can be maddeningly slow, and the damage done by bad papers can be quite large and unwarranted, particularly in the case of papers that could not withstand even a web comment’s critique.