
Supplemental data files published alongside research articles are now common features in online medical journals. At the same time, online commenting has become a rare event, and several prominent journals have shut down the feature entirely, a new study of medical journals reports.

The article, “Use of the Internet by Print Medical Journals in 2003 to 2009: A Longitudinal Observational Study,” by David Schriger and others, appears in the February issue of the Annals of Emergency Medicine.

Tracking a cohort of 138 high-impact medical journals (both general and clinical specialty titles) over a period of seven years, Schriger reports that the number of articles with supplemental files grew from 7% in 2003 to 25% in 2009, largely due to the inclusion of additional tables and figures. For some journals, like Cancer Research, more than 70% of articles appeared with supplemental material in 2009. And while other types of media (like audio, video, datasets, and protocols) are still rare, they are appearing more frequently over time.

Not all journals welcome the trend toward including more and more supplemental data, however. Last year, the Journal of Neuroscience announced it was ending its policy of including supplemental files, arguing that the trend put too much burden on peer reviewers and slowed down the publication process.

In contrast to the trend to support and publish more supplemental data, journals that had implemented rapid response — a feature that allows readers to post comments alongside articles — are showing no growth in participation. Only 18% of articles published in journals that provide the feature include any comments, and when comments do appear, their numbers are low — two on average, Schriger reports.

Three journals (BMJ, the Canadian Medical Association Journal, and Annals of Internal Medicine) buck the trend, with 50% or more of their articles receiving at least one online comment; however, five journals (Lancet, Gut, Thorax, Pediatric Research, and Respiratory Research) dropped the rapid response feature not long after implementing it, Schriger writes. Realizing that online comments were being dominated by a small group of vocal “bores,” motivated by self-aggrandizement and extreme prejudice, BMJ implemented strict oversight of its rapid response feature in 2005.

With some notable exceptions, the expected raucous debate over most medical research has failed to materialize. If rapid response speeds up the process of communication and frees up space that, in print journals, was limited to formal letters to the editor, Schriger maintains, why have most medical journals failed to successfully adopt this feature?

The answer that many of us keep returning to, in order to explain why most scientists eschew Web 2.0, social networks, science blogging, open review, and post-publication review, is that authors gain very little professionally from online commenting. Schriger explains,

The medical community may not be excited about routinely participating in post-publication review because of lack of interest, qualification, or time. Perhaps another factor is that rapid responses, unlike printed letters, are not indexed on PubMed. They are thus not considered to be “real” publications and do not contribute to the assessment of an individual’s output.

Until the embedded cultures of science change and start rewarding public dialog, it is naive to believe that scientists just need more time to feel comfortable with public debate. Culture always trumps technology, yet there are real consequences to the state of scientific knowledge — and more importantly, public welfare — when scientists actively avoid public debate. Schriger maintains,

This lack of support for the rapid response feature is disappointing. It may suggest that readers of research articles are accepting articles at face value despite considerable evidence of widespread deficiencies of publications.

In the Internet’s defense, could Schriger be asking too much of it? From a production standpoint, the Internet has greatly sped up the publishing process. From a distribution standpoint, the Internet provides far more access than print could achieve. Its expansive space has allowed additional text, tables, figures, datasets, videos, and simulations to be published alongside primary articles. Does it also need to transform articles into active social media?

All right, but apart from the sanitation, medicine, education, wine, public order, irrigation, roads, the fresh water system and public health, what have the Romans ever done for us? — Monty Python’s “Life of Brian”

Like ranting about what the Romans ever did for the people of Judea, is it possible that we’re expecting too much from online journals?

Phil Davis

Phil Davis is a publishing consultant specializing in the statistical analysis of citation, readership, publication and survey data. He has a Ph.D. in science communication from Cornell University (2010), extensive experience as a science librarian (1995-2006) and was trained as a life scientist.


15 Thoughts on "How the Internet Changed Medical Journals"

Schriger’s disappointment implies a preconceived notion of how science works and how it needs to change. Both notions may be wrong. Many Web 2.0 advocates seem to be reformers with relatively extreme views: that there are “widespread deficiencies,” for example, which public commenting should correct.

David makes a good point. As I have pointed out before, one problem with the proposed ‘revolution in scientific communication’ is that many of the revolutionaries don’t write, referee or publish papers – in short, they aren’t scientists. They do, however, have a notion of how science is done and seek to apply Web 2.0 to it.

This may be at odds with the workflow of a practicing scientist, who rapidly adopts tools where they are useful (email, cell phones, Web 1.0, etc.) but eschews those that are time sinks or lack obvious utility (Web 2.0, blogging).

I’m trying to think of any other online technologies that were made available to all and heavily publicized, and then sat idle for 5 to 10 years before suddenly being adopted. I can’t come up with any. It strikes me that most of the communication methods which have become ingrained in people’s daily routine did so fairly rapidly. Myspace had 100 million users within 3 years. Facebook hit that milestone in less than 2 years after opening accounts to people outside of academia. Twitter appears to have taken about a year to catch on.

Is it realistic to think that researchers are going to have some magical 180-degree change in attitude toward online commenting after declining to do so for nearly a decade? As Richard Sever’s comment above points out, scientists quickly adopt technologies that are obvious and provide clear benefits. Maybe it’s time to stop barking up this tree and try a different approach.

So, what does this tell us about the prospects for the success of post-publication peer review? Not very encouraging, I’d say.

I don’t think it necessarily invalidates the concept of post-publication peer review. I do think it shows that the currently in-vogue method, simply putting up comment space after papers and assuming the mythical “crowd” will rush to comment and review, is not working.

Likely, post-publication peer review is going to be somewhat like pre-publication peer review, a process that needs to be managed and driven. If the research community concludes that this is valuable, as they have for pre-publication peer review, then they’ll find a way to fund this oversight.

It’s too bad that Schriger et al. couldn’t include the Annals of Family Medicine in their analysis, since we began publishing in May 2003. Our eletter submissions are great! Certainly more than half of our articles have at least one comment.
I have often wondered how BMJ manages to get so many eletters – they really stand out from the pack!

Thank you for this post. I recommend interested readers look at Jason Priem’s blog post on this topic from a couple of months ago, which includes his additional analyses of comments on PLoS articles.

Speaking as a working scientist, we comment on papers all the time, HOWEVER we do that by publishing other papers that actually (try to) push the field forward, rather than just generating idle chatter on the internet. A comment or ‘talkback’ on-line that is not backed up by experiments or new data is not likely to be taken seriously by the field. Talk (especially e-talk) is very very cheap in terms of time and thought invested. Actually contributing something worthwhile to the scientific debate requires a commitment that is orders of magnitude higher than that. As long as doing real research requires such efforts, there is no chance that it will be communicated via rapid commentary on the web.

Having criticized authors for not participating in post-publication evaluation of their papers, I suspect I should respond to these comments.

The underlying motivation for our paper was a strong suspicion that among the many masters the clinical medical literature serves, the advancement of knowledge is down towards the bottom of the list. For more on this you might look at an editorial that Doug Altman and I wrote on this topic.

Inadequate post-publication review of medical research

which includes the following paragraph:
“Finally, the volume and quality of scientific papers may contribute to the problem—a mountain of poor quality unfocused literature has left its readership fatigued, numb, and passive. Each year more papers are published than the year before (about 500 000 research papers were added to Medline in 2009), but the number of letters stays the same.12 Each new paper is another monologue added to the heap. Few read it and fewer care. Errors remain unnoticed or un-noted, and no one seems terribly bothered. ”

I completely agree with Mike_F that research is a slow, detailed process that really can’t be told in 160-character tweets, yet I suspect that there may be important unstated flaws or consequences of papers that could be easily and promptly fleshed out online. This doesn’t have to happen for all, or even most, papers. If it never happens, however, one has to wonder whether anyone is out there paying any attention whatsoever or whether there is no community of readers, just a community of authors each writing for academic promotion, self-promotion, funding, etc., but not for science.

one has to wonder whether anyone is out there paying any attention whatsoever or whether there is no community of readers, just a community of authors each writing for academic promotion, self-promotion, funding, etc, but not for science.

Most papers in science are largely ignored and this is demonstrated by negligible download and citation data, which gives support to the notion that most of science publishing serves a vanity (author) market, rather than a reader market. I don’t see this as evidence that science (as an organized pursuit) is dysfunctional, but rather the opposite: The limited attention in the market is focused toward those studies that make a difference and advance our understanding of science.

This focusing of attention could not happen without some system for concentrating and promoting the most important articles, for without it, readers would spend far too much time attempting to identify papers worth reading and little time actually reading. The most effective of current such systems is still the journal –although it doesn’t have to be– and in the top strata of titles, dialog and debate is alive and well.

For this reason, I am more optimistic.

I object to equating lack of readership with poor quality. If others do not find my results useful, it is no reflection on the quality of my work. It may even be largely a matter of luck, in several different ways. Quality is something one can control, but importance is not.

As for the growing number of papers, that is dwarfed by the growing power of search engines, so there should be no problem.

My view is that any work worth doing (or at least worth funding) is worth reporting, because there is no way to know who might use it, now or over the next 20 years. But none of this has anything to do with post-publication review that I can see. There is no problem to solve. Commenting is for debating controversial findings, not for improving poor ones, whatever that might mean.

David Schriger – Commentary is not a holy grail and Science is not a popularity contest nor a fashion show. To mention just one example, Gregor Mendel’s main publication in 1866 in Verhandlungen des naturforschenden Vereins Brünn had little impact and was cited about three times over the next thirty-five years. He was subsequently posthumously recognized as the originator of modern genetics…

Like others here, I find this line of reasoning somewhat problematic. The number of papers published seems an odd target of blame here, at best an attempt to treat the symptoms, rather than the disease. If the objective is to lower the number of papers published, the solution is obvious: call for a drastic cut in funding, eliminate tenured positions and stop accepting so many new students. This will result in fewer people choosing science as a career, fewer experiments being performed and fewer papers being published. Problem solved?

I tend to think that increased research output is a positive thing, a reflection that more people than ever in more places than ever are doing scientific research. Gain of knowledge is progressing at a greater rate than ever. Much of science is incremental though, and not every paper can be above average. That does not mean that small, incremental gains are without value.

Yes, this does make keeping up with the literature more difficult. Then again, as David Wojick points out, our tools for analyzing information continue to improve. Are today’s saved searches, eTOC alerts and RSS feeds really that much worse than digging through the latest issue of Current Contents and hoping to find something of relevance? Since the published paper is the currency of the land, any attempts to prevent people from publishing essentially prevents them from having a career in science. Do we really want to drive researchers away from the bench in order to make our reading lists shorter?

A lack of online comments on papers should not be seen as a lack of analysis of those papers. 25 years ago, when there were no online journals, there were, of course, no online comments either. Does that mean that scientists of the time didn’t carefully critique and discuss the literature? Were pre-internet researchers “just a community of authors each writing for academic promotion, self-promotion, funding, etc, but not for science?” Researchers do indeed pay careful attention to published papers and tear them apart in journal clubs, in lab meetings and in private conversations with trusted colleagues, just as they’ve always done.

The reason these conversations are not taken to a public sphere is a fairly practical one. We tell our teenagers to take care in what they post to their Facebook pages, as embarrassing moments can damage their future career prospects. The lack of critical comments on research articles is a sign that scientists are well aware of the permanent and searchable nature of such comments, and the potentially damaging impact they may have. A simple test—would you advise your student to post a scathing critique of your department chair? Of the head of the study section for your next grant? Of the chair of the search committee where they’re applying for a faculty position? We like to think of science as some revered objective process, but the reality is that as long as humans are involved, human nature will rule.

The failure of online commenting stems from a failure to create a compelling reason to participate publicly; worse than merely lacking appeal, participation is actually potentially damaging. It should also be noted that not all post-publication review methods have failed. Editorially-driven systems like Faculty of 1000 seem to have no problem generating thoughtful commentaries. Perhaps better solutions lie in this realm: treating post-publication review like pre-publication review, making it an organized and managed process rather than relying on volunteerism.
