The Guardian recently caused a bit of a commotion by changing its article commenting system to a threaded format (replies to a comment are now listed directly below that comment). In response to the controversy, Chris Elliott, the Guardian's Readers' Editor, wrote a column containing an interesting piece of previously undisclosed information:
The Guardian website publishes around 600,000 comments a month, with 2,600 people posting more than 40 comments a month.
Martin Belam then did the math. He extrapolated from that initial figure to get a sense of how well the article comments represent the reading community.
- 2,600 people posting at least 40 comments a month means a total of at least 104,000 comments, or at least 17% of all comments.
- That leaves, at most, 496,000 comments per month to be left by everyone else.
- The Guardian’s total audience for November 2012 was 70,566,108 readers.
- The Guardian’s commenters then, at best, represent 0.7% of the audience.
- At least 17% of the Guardian’s comments come from 2,600 people or 0.0037% of their readers.
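Belam's arithmetic is easy to verify. A few lines of Python, using only the figures quoted above, reproduce each percentage in the list:

```python
# Figures quoted in the post (Guardian, late 2012).
total_comments = 600_000    # comments published per month
prolific_users = 2_600      # people posting 40+ comments a month
min_per_prolific = 40       # lower bound on each prolific user's output
audience = 70_566_108       # total November 2012 readership

prolific_comments = prolific_users * min_per_prolific   # 104,000
prolific_share = prolific_comments / total_comments     # share of all comments
remaining = total_comments - prolific_comments          # 496,000 left for everyone else

# Best case for breadth: every remaining comment comes from a distinct reader.
max_commenters = remaining + prolific_users
commenter_share = max_commenters / audience
prolific_reader_share = prolific_users / audience

print(f"{prolific_share:.0%}")          # → 17%
print(f"{commenter_share:.1%}")         # → 0.7%
print(f"{prolific_reader_share:.4%}")   # → 0.0037%
```

Note that the "best case" line is exactly where the overestimation creeps in: it assumes both bounds are tight, which Belam's own spot checks of prolific commenters show they are not.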
Those numbers are likely overestimates of the community's involvement in commenting, as they assume that no prolific commenter left more than 40 comments (Belam notes that in December, he could find some 1,500 comments left by a group of just four prolific commenters), and that every other comment was left by a different individual who wrote only one.
What does this mean, then, for altmetrics approaches based around the public conversation inspired by a research article? If we assume that these sorts of numbers translate from a newspaper website to a journal website — given the paucity of comments left on articles and the consistency of the 90:9:1 rule, I don’t think this is a huge stretch — then should a tiny and likely non-representative population be allowed to drive the criteria for funding and career advancement in research?
How should post-publication comments play a role, if any, in the metrics used to judge the quality of an article and a researcher's work? Blogging about academic research, tweeting links to research papers, and commenting on articles remain fringe activities (as does using Twitter or blogging in general). These sorts of activities cater to the extremes: to people who either have an agenda they're looking to promote, or to the minority of people who simply enjoy communicating in this manner.
As was recently discussed, the posts here in the Scholarly Kitchen in 2012 that drew the most comments were not the same as the most-read posts. There is a qualitative difference between ideas that are controversial versus ideas that are of great interest to the majority of a community. Comments seem to correlate better with the former than the latter.
This post is not meant to disparage the value of comments — they can be tremendously useful ways to exchange information, to correct problems in an article, to add new information, and to turn things into a conversation. This can benefit the reader, the author, and the commenter. But whether that value can be translated into a meaningful measure of article and researcher performance remains an open question. The fact that comments come from such a tiny and likely non-representative minority of readers makes the challenge even greater.